Essix Reloaded – part 1: test setup


You might have missed that new kid on the block, Essix (pronounce: S6), an essential simple secure stable scalable stateless server for HTTP resources. It’s my take on a just-what-you-need approach to the things-you-need-in-every-project; a resilient barebone backbone for whatever you find is still missing on the web. Maybe you missed it because it was lacking some rubber-hits-road existential litmus testing. Then read on to see how that got fixed.

The test case presented itself to me in the form of a petition running in the Netherlands about all the damage and suffering caused by decades of greedily drilling for natural gas in one relatively peripheral part of the country. Each time the subject was brought up in one of the nightly talk shows on TV, some 10 to 20 thousand more people signed the petition – each time despite their website crashing severely under the sudden load. So I rebuilt that petition page in Essix, put it on the rack, and tightened the thumbscrews.

In this first part, I describe:

  • how the test site was built
  • problems encountered
  • decisions taken, and their motivation
  • features added to the underlying packages
  • the composition of the actual load test
  • how to pull up a computing environment to run the tests

A second part will follow, describing the test results.

Test setup

To set up the test site, I installed Essix,

$ go get -u

initialised a new app,

$ essix init

took the HTML from the original petition page, stripped it down to its bare bones, solidified it in a template, defined some messages, designed a data model, defined some routes & their handlers, and styled the thing up.

Since this project is also arguably the first serious drive with Essix altogether, I ran into some things that are worth mentioning here:


Multi-language entity fields

Though multi-language “messages” were an integral part of Essix already, using the message type as field values in the entity data model is new, e.g.:

groningen := model.InitPetition("groningen")
groningen.Caption = msg.New().
	Set("nl", "Laat Groningen niet zakken").
	Set("en", "Don't let Groningen down")


The representation of a specific petition includes a statement on the current number of signatures for that petition. To keep that statement fresh, we could update the signature count for a petition with each new signature. Then again, under a high signing load, that counter will be hot as hell. If there has to be one main performance bottleneck, it really shouldn’t be this trivial counter thing.

One part of the fix is to only update an in-memory variable on each signature’s confirmation, and to spin off a parallel goroutine that persists the new counts in the entity model at a regular interval. Since that will lead to inconsistencies when stopping application service instances, there’s an accompanying synchronisation resource, which enables scheduling (e.g. nightly) of the relatively costly operation of actually counting all confirmed signatures for each petition.

Multiple tables for same entity type

Another fix for the counting problem is the new ability to use different database tables for one and the same entity type; in this case: one separate table of signatures for each specific petition record. The signatures table is registered when a new petition record is saved.

Confirmation emails

Gotcha! 😀 This is all copy & paste from basic Essix 😀 👊 💫


Progressive enhancement

Did anyone mention it had to be resilient? The core functionality works in just plain HTML + HTTP:

[Screenshot: Look mama, no styles!]

Admittedly, with support for the CSS style sheet, things improve considerably. Even without a single line of JavaScript:

[Screenshot: Look mama, no JavaScript!]

There is some JavaScript involved though, mainly around the signature form. The tricky thing there is that the signature form isn’t really part of the petition resource.


A GET request to the /signature resource renders the form that can POST the data to sign a petition. That same form is shown as part of the representation of a /petition resource, as seen above. One can of course cheat a bit, and just include the signature form as an integral part of the petition page. A major downside then is that submitting the form will result in a refresh of the entire page.

Luckily, HTML provides a nice solution for this out of the box: with the iframe element, we position an inline frame on the /petition page, that issues a separate request for the /signature resource, and shows the resulting form. When the form is posted, the /petition page remains untouched, while the iframe renders the /signature response:

[Screenshot: iframe on the /petition page, showing the POST /signature response]

As a fallback for (ancient) browsers that don’t support the iframe element, the signature form is included inline. In that case, if JavaScript is available, we prevent the full page refresh by intercepting the form submit, sending the POST request through Ajax, plucking the body contents out of the response, and putting that in the place of the form element.

Note that the wildly popular Ajax solution is not the first thing we turn to, but a kind of last resort – we won’t depend on JavaScript if we don’t have to.

All of this sits in the /petition template. There’s also some dynamic resizing of the iframe’s height going on, to enable its sticky positioning.

CSS prefixes

In the stylesheet, we only declare standard CSS properties and values. On build, Essix adds any needed vendor prefixes using autoprefixer.

Remote debugging

To see what works and not on mobile, I use Xcode’s Simulator + Safari Web Inspector for iOS:

[Screenshot: iOS Simulator]

And Android Studio’s Emulator + Chrome DevTools for Android:

[Screenshot: Android Emulator]

Load test

A few extra things need to be taken care of before the application is ready for testing:


The /provision resource is used to initially populate the database with the petition and a considerable number of signatures, in a reasonable amount of time. Before each test run, we hit /provision to delete the signatures from the previous run.


On a successful submit of the signature form, the signature is saved in the database, and an email is sent, asking to confirm the signature. For a successful confirmation, a token value is needed, which is part of the link in the email, and tested against the token saved with the signature in the database. This is to invalidate signatures submitted with other people’s email addresses.

The load test should include confirmation to be comparable with a real-world load, but it won’t try to manage several thousand email accounts to receive the confirmation tokens. Instead, just for the purpose of load testing, the confirmation token is returned in the signature response, but only if the GO_ENV environment variable is set to “test”.

Though the emails from the test-generated signatures aren’t read, the server still sends the messages out to the mail server, so the work the server does for each signature in test isn’t any less than in production.

Rate limit

In real life, we’d need to set a rate limit on posting a signature, to prevent bots from loading our database with bogus entries. Luckily, in Essix, that’s just a matter of passing the handler function to ratelimit.Handle(). For the load test, the rate limit is bypassed by setting the RATELIMIT environment variable, which normally sets the default timeout in seconds, to “0”.

Test definition

The aim is to see how many new signatures we can support without service interruption, and what configuration supports the highest load. To sign a petition, a user would: load the petition page, fill in the signature form, check their email, click the link to load the confirmation form, and submit their confirmation. The test should run many parallel request sequences of:

  1. GET /petition
  2. GET /signature
  3. POST /signature
  4. GET /signature/confirm
  5. PUT /signature/confirm

This setup is configured in an Apache JMeter test file.

Think times

For a realistic scenario, variable delays are added in the test script:

  • 5 – 45 seconds between seeing the petition form & submitting the signature
  • 15 seconds – 1 minute between submitting the signature & navigating to the confirmation form
  • 1 – 5 seconds between seeing the confirmation form & submitting it

Though we can disable the delays to put the server under a constant load, which might seem like a way to “really see what it can do”, that arguably doesn’t bring any actual insights, since such a scenario will never occur in practice. It might make sense to play a bit with the limits of the various delays, and maybe also cater for the scenarios where people do visit the petition page but don’t sign it, or do sign the petition but don’t confirm their signature. All that is for maybe later; I’ve only tested with confirmed signatures, and with the delays set as above.


Essix refuses to process PUT, POST, PATCH, or DELETE requests that don’t carry a valid encrypted form token, as protection against cross-site request forgery (CSRF). The form tokens also carry the input for the rate limiting function. Since signing a petition is done without logging into an application account, CSRF is not a real risk in the case at hand. One might argue we should skip the form token tests to gain performance. On the other hand, providing the ability to bypass token testing would introduce gaping security holes in other applications. Since a quick test proved the performance impact of token computing to be negligible, I hastily decided to keep the platform secure, and just deal with the tokens.


1. nodes

To set up a multi-node test environment for the application to run in, we start off with three 2GB ($20/month, $0.03/h) droplets on DigitalOcean:

$ export DIGITALOCEAN_ACCESS_TOKEN="945g4976gfg497456g4976g3t47634g9478gf480g408fg420f8g2408g08g4204"
$ export DIGITALOCEAN_SIZE="2gb"
$ essix nodes -d digitalocean -F -m 1 -w 2 create petities

Here ‘petities’ can be replaced with whatever the swarm is to be named. Oh, and if you haven’t installed Essix, it’s:

$ go get -u
2. r

Install the database cluster (two servers per node):

$ essix r -n 2 create petities
3. cert

Clone the petities repo, cd to it, and generate an SSL certificate:

$ essix cert
4. build

Build your image & push it to the Docker Hub. (Note: since the image includes your server certificate, you’ll eventually want to make it private on Docker Hub, or set up a registry of your own.)

$ docker login
$ essix build you 0.1
5. run

Run the application service, setting the environment variables:

$ essix -e \
-e DB_POOL_INITIAL=100 -e DB_POOL_MAX=1000 \
-e RATELIMIT=0 -e GO_ENV=test \
run you 0.1 petities
6. config

Load the email config in the database:

[Screenshot: RethinkDB’s web admin]


 EmailAddress: '', PWD: '8t763w4c87tcw39',
 PortNumber: '587', SmtpServer: ''
7. scale

Restart the application to load the updated config from the database, by first scaling it to 0 replicas, then to e.g. 6 on each node:

$ essix -r 0 run you 0.1 petities
$ essix -r 18 run you 0.1 petities

Alternatively, scale the Docker service directly:

$ docker-machine ssh petities-manager-1 docker service scale petities=0
$ docker-machine ssh petities-manager-1 docker service scale petities=18
8. /provision

Now browse to /provision to generate the petition record plus a number of signatures. The real Groningen petition has around 200K signatures; loading that number should take a minute or two – monitor the rate of writes on the RethinkDB Dashboard to see when it’s done.

9. load

Use Apache JMeter to open the petities.jmx file in the root of the repo:

[Screenshot: The test plan in JMeter]


That’s all for now on the delicate details. The next part of this post will discuss the test outcomes. It might take a while, because everything currently seems to point to there being no getting around setting up a distributed load-generating solution as well 🙂

A case for the use case

In the context of agile development, functionality is often specified by reaching straight for user stories, so that within a very short time an agile team finds itself facing an enormous pile of user stories, each of which represents a promise for a future conversation. Shouldn’t we then actually start having those conversations? What do we do with the outcomes of those conversations? How do we prevent having the same conversations over and over again? From that line of thought, Jim Coplien and Gertrud Bjørnvig, in their book Lean Architecture for Agile Software Development, make a case for the use case.

Use cases describe what the system does for its users, or conversely, what users do with the system. The main purpose is to capture, in a manageable way, what you have discussed with all the key stakeholders involved about the functionality, and which decisions were taken about it, so that you have a basis for building and testing the system, but also for maintaining, adapting, and extending it.

A use case is about a goal that a user wants to achieve with the system. The first thing needed, therefore, is insight into the different kinds of users of the system. For each user type, record a name and an unambiguous description that all stakeholders could agree on.

Besides the names and descriptions of the users, it’s also good to choose a fitting name for the system, and preferably a short but clear problem definition as well. The problem definition states very explicitly what the gap is between the current situation and the desired situation, which the system is meant to fill. Such a written-out and visibly shared problem definition continuously provides a lot of focus for everyone involved.

From this context (system name, problem definition, name and description per user type), you can quickly arrive at a helicopter view of the system by listing each user type’s main goals. Those goals then become the names of use cases, since a use case is about a goal that a user wants to achieve with the system.

For example, in the case of the Groenland Bank, a new online system for a fictitious bank, a first helicopter view could look like this:

View recent transactions
Transfer money
Print account statements
Add a recurring payment
Pay a bill

Not just in this example, but in actual practice too, the explicit intention is to keep the helicopter view simple and compact. Even for complex systems, the number of goals per user remains limited, because the complexity lies not so much in the number of use cases as in the variation within each use case. For each use case, we first describe, as a basis, a sunny-day scenario, which is later supplemented with separate descriptions of anomalies: deviations from the sunny-day scenario. The system grows more complex as more anomalies get implemented.

Before the sunny-day scenario is worked out, we first record the following things about a new use case (as discussed together with all the key stakeholders):

  • Business motivation
  • User intention
  • Precondition
  • Postcondition

For example, for the Groenland Bank’s Transfer money use case:

As part of our strategy Enable the customer to do their banking from home, we see transferring money as an important service. The Account Holder can keep an eye on their accounts and transfer money to an account whose balance is running low, or is expected to run low soon. It can save us (the bank) the time of sending letters about insufficient balances, and it can also prevent us from having to close accounts and reopen them later. In addition, we would also like the Account Holder to be able to transfer money to accounts of other Account Holders – both within our own bank and to and from other banks. Our competitors already offer those services, and we can’t afford to wait much longer ourselves.
As an Account Holder, I want to be able to transfer money between my accounts, so I can make sure I’m not overdrawn anywhere and my debit card doesn’t get blocked.
The Account Holder is logged in to the Groenland Bank, and an overview of their accounts is shown on the screen.
The amount the Account Holder entered has been moved from the source account to the destination account. The two accounts are balanced, and the transaction logs have been updated.

The sunny-day scenario is now described as follows:

  • In a table with columns for step numbers, for the user intention, for the system responsibility, and for comments.
  • Each step description starts with the user name or the system name.
  • The terminology used is carefully chosen and consistently applied. It’s advisable to maintain a glossary defining the terms used. These concepts effectively form building blocks for the system’s architecture.
  • Any clarifications, decisions, open discussion points, and new questions are noted in the Comment column.

Again taking the Groenland Bank’s Transfer money use case as an example, the sunny-day scenario could look like this:

Step / User intention / System responsibility / Comment
1 The Account Holder selects a source account and chooses Transfer. The Groenland Bank shows the source account, a list of destination accounts, and a field to enter the amount. Should the Account Holder first choose Transfer and then the destination account, or the other way around?

The list of destination accounts is standard: the Account Holder’s own accounts, excluding the source account

2 The Account Holder chooses a destination account, enters the amount, and confirms. The Groenland Bank shows the transfer details (source account, destination account, date, amount) and asks for a password to authorise the transfer. The default date is the current date.
3 The Account Holder enters the password and authorises the transfer. The Groenland Bank moves the money, updates the books, and shows a transaction receipt. Routine: Move money and update the books.

Is transaction receipt the right term?

Is a transaction receipt even needed when the transaction is between two of one’s own accounts?

Should the Account Holder be able to print the transaction receipt?

Step 3 refers to a routine; that is a sequence of system actions that can be used in the same way across different use cases. For a routine, no business motivation or user intention is described, since those follow from the use case at hand (and thus differ between the use cases that use the routine). Routines also have no anomalies; they are unambiguous pieces of assembly-line work. For each routine, we record:

  • Name
  • Precondition
  • Steps
  • Postcondition

The definition of the Move money and update the books routine in the Groenland Bank, for instance, is as follows:

A valid source account and destination account are known, as well as the amount to be transferred.
  1. Groenland Bank verifies sufficient balance;
  2. Groenland Bank updates the accounts;
  3. Groenland Bank updates the data for the account statements.
The periodic account statements reflect the precise nature of the transaction (a transfer is a transfer – not a combination of a withdrawal and a deposit)

As said, further discussions about the use case will surface all kinds of deviations from the sunny-day scenario – the anomalies. The nice thing is that the description of each anomaly can refer to a specific step in the sunny-day scenario.

At some point, the list of anomalies within the Groenland Bank’s Transfer money use case could look like this:

Step Ref / Branching action / Comment
1a The Account Holder adds a note to the transaction on the source account. Doesn’t this belong in the sunny-day scenario?

What is the default text if the Account Holder doesn’t add one?

1b The Account Holder wants to transfer to another customer’s account. The Account Holder must enter the name and account number.
1c The Account Holder wants to add another of their own accounts to the destination list. The Account Holder can give the account a name (mandatory?)
2a There isn’t enough money in the source account for the transfer. Show an error message and roll back the transaction. (Who decides about messages to the Account Holder? Define a minimum balance?)
2b The Account Holder adds a note to the transaction on the destination account. Doesn’t this belong in the sunny-day scenario?

What is the default text if the Account Holder doesn’t add one?

2c The amount doesn’t satisfy the validation rules. Validation rules?
2d The Account Holder enters a future date for the transfer. The Groenland Bank offers the option to enter a future date. The transfer will take place on that day, according to banking days. (How far in the future may the date be?)
3a Incorrect password. Block the account?
3b The transaction takes longer than the maximum allowed duration. Possible causes for the delay? Which actions if this happens? What is the maximum allowed duration?
3c The transaction fails. Possible causes of failure? Which recovery actions? Applicable message texts?
All The Account Holder seeks online help. Who is responsible for online help?

When an anomaly surfaces during discussion of a use case, it’s advisable to note it down immediately, so that that discussion doesn’t have to be repeated later. Not all described anomalies necessarily have to be implemented – for each release, you decide which sunny-day scenarios of new use cases and which anomalies of existing use cases you want to add (this is a business decision). For each use case, the sunny-day scenario is the first thing implemented, as a basis for further extensions. The occasional anomaly will come along right away in the first release, others follow in later releases, some keep getting left on the shelf, while new cases keep being added in the meantime – the number of anomalies per use case is essentially unbounded.

With the stable sunny-day scenario and the variation in the anomalies, the use case keeps steering further discussion towards informed, well-considered decisions about ever-new functions and features, extensions, exceptions, alternatives, error handling, and non-functional requirements.

Web design? Resilient?

In the early days of the world wide web, determining the layout and function of a web page on a screen was approached pretty much the same as in the case of a page in “print”. Which is actually quite understandable, since designers tend to design with designers’ minds, using designers’ tools. Try Photoshop: when starting a new document, the first thing to do is to set its width and height. The fixed-width approach to web page design, though crippled from the start, only started to grow really problematic with the appearance of the iPhone, and later the iPad, and the myriad of alternatives – all similar, but all with very distinctive screen widths. The problem was, as Jeremy Keith very eloquently points out in his amiable (and free!) book Resilient web design, that using fixed-width elements to design a web page rendered on variable-width screens is materially dishonest.

But now we know better. Right? Now we use HTML strictly for marking up the meaning of content, and CSS strictly for presentation. That’s materially honest. And it’s a nice separation of concerns. So we use a table element to mark up the structure of tabular data, and never for layout purposes. Right? In HTML5, we even explicitly obsoleted the align and width attributes on table elements. Good for us!

The pair of HTML + CSS is very pleasantly loosely coupled: while the HTML contains some hooks for the CSS to cling to, exactly the same HTML content can be presented in any imaginable way by applying changes to just the CSS, while on the other hand one single CSS file can serve to style any conceivable content in HTML. Another remarkable property is that the same HTML, when for whatever reason the accompanying CSS is crippled or lost, will still get presented in a perfectly readable way. In a not so beautiful, default way, but still entirely useful for a clear interpretation of the content at hand.

But there’s more.

Both HTML and CSS share the property of being a declarative language, meaning they don’t instruct a computer to follow a step-by-step recipe, but just define some information (HTML: meaning, CSS: presentation) about some content. This gives them a very forgiving attitude towards errors: when a browser is rendering a page, and encounters an HTML tag it doesn’t recognise, it ignores the markup, and displays the tag’s content. It doesn’t report an error, it doesn’t stop processing, it just does the default thing, keeps calm, and carries on. Same thing in CSS: unknown selectors, properties, and values are just ignored, a default style is applied, and processing continues. That behaviour is by design, and it’s tremendously powerful! It’s a huge advantage. It really is. It’s true.

Thing is, the liberal way in which HTML & CSS are parsed, enables a profoundly robust route for innovation by leveraging the ever-extending feature sets of modern web browsers. Not every user on the web has the latest and greatest browser version installed, and not every device has the topmost capabilities. Still, it’s perfectly safe to use the hottest of the new stuff in your HTML and CSS, since you can rely on any non-supporting browser to just keep calm, and carry on. Nothing will break; some get to see the full glory of your endless creativity, everyone will get the same content in one perfectly usable presentation or another. Websites don’t need to look exactly the same in every browser.

Of course we have to consider that other language of the web as well: JavaScript. It’s quite popular. JavaScript is used in many ways, but its main concern is enabling advanced interactions between the content and the user, and between different elements within the content. A major difference from HTML & CSS is that JavaScript is an imperative language, instead of a declarative one. It defines a step-by-step program that the browser should execute. If something fails, an error is thrown, and execution stops. Compared to HTML & CSS, JavaScript is very, very breakable. It enables many nice ways of interaction, but it’s safest to look at those more as enhancements than as core functionality. There are a lot of things that can go wrong – it’s safe and wise to use it, but you’d better not rely on it for any core functionality.

Then what is a sound approach? Three steps:

  1. Identify core functionality.
  2. Make that functionality available using the simplest possible technology.
  3. Enhance!

That’s the strategy known as progressive enhancement. It enables you to go as wild as you want on the latest and greatest hot new stuff, because you can always rely on the safety net of your basic resilient HTML + CSS design. Thoughtfully starting off with full focus on the basics, using the plain old bare open standards of the web, isn’t actually setting you back to what we regarded as normal in the days of Geocities – quite the contrary: it’s a great enabler of experiment and innovation. All by design. Enhancing progressively is an act of future friendliness.

If you follow the buzz, it very much seems that web development is just another word for choosing one of the popular JavaScript frameworks. The case for progressive enhancement in Resilient web design quite firmly adds to the reasons why e.g. React is a terrible idea (it’s sad!). It also adds to a solid foundation for the powerful recommendations of ROCA – resource oriented client architecture.

Enfin, you might want the book as well. It’s a good read.


[Image: Essix logo]

Introducing Essix

Shortcut: create more web with Essix

$ go get -u



So when you have this prospect who’s planning to “build a web based tool” where users form communities to stand strong together in planning and selling their small produce to big fat client organisations… where do you start?

Yes, you could reach out for the nearest open source foolproof Content Management System, and fling in a Community Management plugin and a Deal Broker plugin with a proven track record, choose a theme, click it together, and run.

That’ll work brilliantly. If successful, users start building considerable parts of their business on it. That’s about the moment they call you to tell you something is broken and needs fixing. Urgently. Then you wonder where to click to fix it, and quickly call the local open source foolproof Content Management System super expert. Because that’s not you. They dive deep for an hour or two, then conclude you should call the regional proven track record Deal Broker plugin super expert. Because that’s not them.

Enfin, you get the picture: packaged things that do a lot, out of the box, without programming, just configuration, they tend to break down dramatically sooner or later, without telling you how to fix them.

So should you start from scratch then? Well… why not? The tremendous advantage would of course be that you end up with precisely what you need and nothing more, while knowing every bit and byte of what it does and how it works. But yeah.

No, you can’t start from scratch for every project. Because… there’s so much that you need! So what is it? What do you really need for each and every project that makes it too much to start from scratch?

Let me speculate a bit here. What you always need is:

  • A highly available database cluster
  • A straightforward way to manage business object data life cycles
  • Transparently secure user authentication & authorisation
  • HTML templating
  • User & system error handling
  • A responsive static file server
  • Clear definition of request routes with paths & methods & handlers
  • All forms inescapably protected from Cross Site Request Forgery attacks
  • Texts and labels in multiple languages
  • Sending email
  • HTTPS with certificate generation
  • HTTP2 would be welcome
  • A short-cycled build system
  • An automated script to build computing environments
  • A no-brains way to scale out by adding computing resources
  • A declarative rate limiting capability, protecting from robots & Denial of Service attacks

Those kinds of things. Am I far off? Anyway, it’s clear enough: that is too much to build from scratch – for any project. But then again, once you had all this, what else would you need? Wouldn’t all projects then seem a bit like “well, what we basically need is some clearly defined business logic, a thought-through user interface, and some consistent styling – then we hack it together in a jiffy”? Hm, so, yeah, why not build it from scratch once?


Why not? Because it’s sitting here, right under your nose. The name is Essix, pronounced S6. Essix runs an essential simple secure stable scalable stateless server. Nothing less, and certainly nothing more. It builds right on top of the very standards of the web, so you can fully and deterministically trace what’s going on in any part of it. The code is pretty neatly documented as well. No, it’s not JavaScript. Because JavaScript is getting so complex. It’s Go, because Go plays nicely, and checks things. Besides Go, it leans on Docker Swarm Mode for its robust way of running things. But it knows how to do that; it won’t get in your way.


Follow the Quickstart to kick it off, and peek around in the Example to get the hang of it. Make yourself feel at home. You’re welcome.

An easy recipe for Let’s Encrypt

Obtaining a trusted TLS certificate has just become a lot easier, thanks to Let’s Encrypt. Still, it can be quite a winding path to get to where you want to end up. The following recipe eventually did it for me, and actually makes it fairly quick and simple.


There are four things that you need for this to work:

  1. A proper domain
  2. An account with DigitalOcean
  3. A link between the two
  4. Docker

1. Domain

Yes, for a trusted certificate, you really do need an actual domain. They come cheap or expensive; I got an .nl domain through Strato for one year for €0,84. You can settle for any domain, as long as it ends up in the public DNS.

2. DigitalOcean

Though they want your credit card details, an account with DigitalOcean is free. They only charge something when you create any virtual machines, which you don’t need for this. What you do need is their domain manager, exposing an API supported by the Let’s Encrypt tools. You also need to generate an API token.

3. Link


Tell your domain provider you’re managing the domain through DigitalOcean. With Strato, it worked like this:

  1. Add a “sub domain” (e.g. under
  2. Go to the DNS settings for the sub domain.
  3. Configure the NS-record to point to these custom name server addresses (the trailing dot proved significant):


Go to DigitalOcean’s domain manager to “Add a domain”, providing your (sub) domain, and any IP address, and clicking Create Record:

DigitalOcean “Add a domain”

4. Docker

Install Docker if you don’t have it.

Do the trick

We’ll use Docker to run the excellent instant xenolf/lego image, telling it (line by line) to:

  • Automatically remove the container when it exits
  • Save the results in the current directory (i.e. ./accounts & ./certificates)
  • Provide our DigitalOcean API key
  • Accept Let’s Encrypt’s Terms Of Service
  • Check with DigitalOcean’s DNS
  • Use our email address as an account name with Let’s Encrypt
  • Generate a certificate for the given domain
$ docker run \
--rm \
--volume $PWD:/.lego \
--env DO_AUTH_TOKEN=945g4976gfg497456g4976g3t47634g9478gf480g408fg420f8g2408g08g4204 \
xenolf/lego \
--accept-tos \
--dns=digitalocean \
--email=you@example.com \
--domains=your.domain.com \
run
2016/11/02 20:14:41 No key found for account Generating a curve P384 EC key.
2016/11/02 20:14:41 Saved key to /.lego/accounts/
2016/11/02 20:14:41 [INFO] acme: Registering account for
2016/11/02 20:14:42 !!!! HEADS UP !!!!
2016/11/02 20:14:42 
 Your account credentials have been saved in your Let's Encrypt
 configuration directory at "/.lego/accounts/".
 You should make a secure backup of this folder now. This
 configuration directory will also contain certificates and
 private keys obtained from Let's Encrypt so making regular
 backups of this folder is ideal.
2016/11/02 20:14:42 [INFO][] acme: Obtaining bundled SAN certificate
2016/11/02 20:14:42 [INFO][] acme: Could not find solver for: http-01
2016/11/02 20:14:42 [INFO][] acme: Could not find solver for: tls-sni-01
2016/11/02 20:14:42 [INFO][] acme: Trying to solve DNS-01
2016/11/02 20:14:43 [INFO][] Checking DNS record propagation...
2016/11/02 20:14:48 [INFO][] The server validated our request
2016/11/02 20:14:48 [INFO][] acme: Validations succeeded; requesting certificates
2016/11/02 20:14:49 [INFO] acme: Requesting issuer cert from
2016/11/02 20:14:49 [INFO][] Server responded with a certificate.
$ ls -la certificates/
total 24
drwx------ 5 wsc staff 170  3 nov 11:10 .
drwxr-xr-x 4 wsc staff 136  3 nov 11:10 ..
-rw------- 1 wsc staff 3452 3 nov 11:10
-rw------- 1 wsc staff 228  3 nov 11:10
-rw------- 1 wsc staff 1675 3 nov 11:10


There you go, magico fantastico.


  1. Lego supports quite a few other DNS providers besides DigitalOcean, so you’re not necessarily tied to them at all.
  2. The example uses bash to run the command. On Windows, I expect it’s fairly straightforward to port it to CMD or PowerShell; otherwise, try Git Bash.
  3. In Essix, this would just take:
$ export DIGITALOCEAN_ACCESS_TOKEN=945g4976gfg497456g4976g3t47634g9478gf480g408fg420f8g2408g08g4204
$ essix cert




Rethink Swarm Mode

So we need another “web site & database”, right? Of course. Always! But this time, we want it to be solid. Very solid.

So we create a stateless application server. Maybe I’ll write about that later.

So we look for a clustered database that scales easily. Very easily. Enter RethinkDB.

So we need it to run somewhere. Somewhere as in: I don’t care, as long as it’s pretty stable, and connected to the Internet. Somewhere as in: on a laptop for the developers in exactly the same way as in the cloud for the end users. To minimise the unexpected. Enter Docker.

So we need a cluster of Docker things running a Rethink database together. Enter Docker Swarm Mode. Let me show you how I spin up any number of previously nonexistent machines to flock into a swarm and serve us a highly available, fault tolerant clustered database, at the touch of a button, anywhere I want it.

1. Install Docker

Docker lets you package your and others’ software into “images” that run as systems of their own, and are transferable between environments – so you can develop and test with exactly the same image on your laptop as the one that will end up serving the end users in the clouds.

Start on that developer laptop. Install Docker if it isn’t there yet – Mac Windows Linux.

By the way, if you are on Windows, you’ll need to find a way to run Bash to be able to run the commands listed in this post (as well as the scripts that we link to later on) – if you’re looking for a solution, try the Git BASH that’s included in Git for Windows.

2. Install Docker Machine

Docker Machine lets you create and manage… machines. Virtual machines to run Docker images. On your laptop, on your server, or in the cloud. It supports quite a few common cloud providers right out of the box.

But wait! If you installed Docker on your Mac or Windows PC, Machine is already there. Otherwise, read the instructions to get it.

3. Install VirtualBox

Oracle’s VirtualBox lets you create… virtual boxes. Virtual machines, that is. Docker Machine uses VirtualBox to run machines locally. On your developer laptop, for instance.

But wait! If you installed Docker on your Mac or Windows PC, VirtualBox is already there. Otherwise, download the sweetness.

4. Spin up some nodes

Open a terminal and create a local machine that will act as the “manager node” in our “development swarm”:

$ docker-machine create --driver virtualbox manager

Add a node that will act as a worker in our swarm:

$ docker-machine create --driver virtualbox worker1

And one more worker to top it off:

$ docker-machine create --driver virtualbox worker2

Now, wasn’t that easy?

5. Swarm it together

Look up the IP address Docker Machine made up for our manager node:

$ MANAGER_IP=$(docker-machine ip manager)

Now to let there be a swarm, we use docker-machine to SSH into the manager node, and initialize the beast there. We need to feed it the IP address we found:

$ docker-machine ssh manager \
docker swarm init --advertise-addr $MANAGER_IP

It will tell us we need some token in order to get the workers to join the swarm as well. Let’s just fetch that thing once now, and keep it handy:

$ TOKEN=$(docker-machine ssh manager \
docker swarm join-token --quiet worker)

Now extend the swarm to include the two worker nodes:

$ docker-machine ssh worker1 \
docker swarm join --token $TOKEN $MANAGER_IP:2377
$ docker-machine ssh worker2 \
docker swarm join --token $TOKEN $MANAGER_IP:2377

There you go.

By the way: note that we’re using the newish Docker-native Swarm Mode here. Docker Machine provides some swarm-related options, but we don’t use those, since they’re for the “legacy” swarm feature, not for Swarm Mode.

6. Rethink all the nodes!

Now that we have this swarm of three, let’s put a network on it for our little database. We do this on the swarm’s manager node, and tell it to use the overlay driver to get it accessible swarm-wide, and call it “dbnet” – since names should make sense.

$ docker-machine ssh manager \
docker network create \
--driver overlay \
dbnet

Also, we need some storage for the data files:

$ docker-machine ssh manager \
docker volume create \
--name dbdata

Now, let’s get that server running:

$ docker-machine ssh manager \
docker service create \
--name db \
--replicas 1 \
--network dbnet \
--mount src=dbdata,dst=/data \
--publish 8080:8080 \
rethinkdb

We’re creating a “service” for it on the swarm, and we call it “db”, use our swarm-wide “dbnet” network, put its data files on the “dbdata” volume, let us reach the administrative web application on port 8080 from outside the swarm, and use the “rethinkdb” image that it’ll download from the Docker Hub. All nice and clean.

But hey, what is this “–replicas 1” sitting there? Are we starting just one instance of the server? Hardly a cluster then, right?

It’s true. The thing is: in order to form a cluster, we need to tell all subsequent servers, on starting them, to join the first one. And when we’re the first one, trying to join any other server would just fail miserably.

So let’s get some more to join the club. But first, we need some storage for those as well:

$ docker-machine ssh manager \
docker volume create \
--name db1data

Okay, now we go:

$ docker-machine ssh manager \
docker service create \
--name db1 \
--mode global \
--network dbnet \
--mount src=db1data,dst=/data \
rethinkdb \
rethinkdb --join db --bind all

So there we have our actually substantial “db1” service. Because of the “global” mode, it’ll run three servers – one on each node: manager, worker1, and worker2. If we had multiple server instances on a single node, they would clash over their respective data files on the “db1data” volume. Note that while the volume is managed at swarm level, its instances on each node are all separate, thus available exclusively to that node’s server. Should we want multiple servers per node, we could just add another global service “db2” and volume “db2data” in exactly the same way – no limits there, though I’m not really sure about the practical value of having more than one per node.

By the way, the first “rethinkdb” in the command line is the image name, the second is the command that starts the server – we need to override the default command that we relied on earlier, to get the instruction in for joining the cluster. It uses the service name “db” to reach the first server.

7. Check it out

Time to see what we have now. To have a consistent entry point for the web admin, create an SSH tunnel to it like this:

$ docker-machine ssh manager \
-fNL 8080:localhost:8080

Then, go for it:

RethinkDB Web Admin

Sir, 4 servers connected, Sir! Gotta love this, don’t you?

8. Use it

Any clients should connect to port 28015 on the “db1” service. While the “db” service will work as well, you wouldn’t want to depend on the availability of that single replica, would you?

We could publish port 28015 to access it from outside the swarm, but why not create an application service running inside of it?

For instance, in Go, we could try the Hello world example of gorethink, spraying some service-worthy behaviour on it by wrapping it in a canonical http server example:

package main

import (
  "fmt"
  "html"
  "log"
  "net/http"

  r "github.com/dancannon/gorethink" // the gorethink driver
)

func main() {

  var url = "db1:28015"

  session, err := r.Connect(r.ConnectOpts{
    Address: url,
  })
  if err != nil {
    log.Fatalln(err)
  }

  http.HandleFunc("/bar", func(w http.ResponseWriter, req *http.Request) {

    res, err := r.Expr("Hello from Rethink").Run(session)
    if err != nil {
      http.Error(w, err.Error(), http.StatusInternalServerError)
      return
    }

    var response string
    err = res.One(&response)
    if err != nil {
      http.Error(w, err.Error(), http.StatusInternalServerError)
      return
    }

    fmt.Fprintf(w, "Hello, %q 0.1\n", html.EscapeString(req.URL.Path))
    fmt.Fprintf(w, response+"\n")
  })

  log.Fatal(http.ListenAndServe(":9090", nil))
}

To package that, let’s follow Kelsey Hightower’s approach for assembling a completely dependency-free binary, that can run in the tiniest of tiny images.

If you’re not into go, and don’t feel like getting into it, you can skip over the next bit, and just pull my image from the Docker Hub. Otherwise:

Install go (locally) if you haven’t got it yet.

Create a new directory “rethinkswarmmode”, with a new file “foo.go”, and paste in the go code from above.

Navigate to the “rethinkswarmmode” directory, and run the formatter:

$ go fmt

Fetch the one source dependency (the gorethink driver):

$ go get

Compile the code:

$ CGO_ENABLED=0 GOOS=linux go build -a -tags netgo -ldflags '-w' .

Now, to build a Docker image, we need a Dockerfile:

$ echo "FROM scratch" > ./Dockerfile
$ echo "ADD rethinkswarmmode rethinkswarmmode" >> ./Dockerfile
$ echo "EXPOSE 9090" >> ./Dockerfile
$ echo "ENTRYPOINT [\"/rethinkswarmmode\"]" >> ./Dockerfile

That’s right: from scratch! Like I said: no dependencies  🙂

You could build the image “remotely”, on each consecutive swarm node…

$ docker-machine ssh manager \
docker build -t yourname/rethinkswarmmode:0.1 $PWD
$ docker-machine ssh worker1 \
docker build -t yourname/rethinkswarmmode:0.1 $PWD
$ docker-machine ssh worker2 \
docker build -t yourname/rethinkswarmmode:0.1 $PWD

…or you could build it locally, then push it to a shared repository (i.e. Docker Hub). That’s much prettier, but also slower, and requires you to have an account for the repository, and to be logged in ($ docker login --username yourname --email youraddress, then type your password):

$ docker build -t yourname/rethinkswarmmode:0.1 .
$ docker push yourname/rethinkswarmmode:0.1

Either way… now it’s run time! (Just replace “yourname” with “wscherphof” if you skipped the go compiling and image building)

$ docker-machine ssh manager \
docker service create \
--name rethinkswarmmode \
--replicas 6 \
--network dbnet \
--publish 9090:9090 \
yourname/rethinkswarmmode:0.1
$ docker-machine ssh manager -fNL 9090:localhost:9090
$ curl http://localhost:9090/bar
Hello, "/bar" 0.1
Hello from Rethink

There you go!

$ docker service ps rethinkswarmmode
ID                        NAME               IMAGE                         NODE    DESIRED STATE CURRENT STATE          ERROR
4itnyefnkfp8v10zwu2ksx9cd rethinkswarmmode.1 yourname/rethinkswarmmode:0.1 manager Running       Running 21 seconds ago
dk76qyhlowrz1niiuc4q23f2d rethinkswarmmode.2 yourname/rethinkswarmmode:0.1 worker1 Running       Running 20 seconds ago
0het5jrtldneddkludyf1ahn1 rethinkswarmmode.3 yourname/rethinkswarmmode:0.1 worker1 Running       Running 20 seconds ago
emounxbjcuzo7sfe8siwydg3z rethinkswarmmode.4 yourname/rethinkswarmmode:0.1 worker1 Running       Running 20 seconds ago
a0f6qqfw3dcof39t77w7gm850 rethinkswarmmode.5 yourname/rethinkswarmmode:0.1 worker2 Running       Running 21 seconds ago
d4iasxlxj39kqrmxwz4hv64z7 rethinkswarmmode.6 yourname/rethinkswarmmode:0.1 worker2 Running       Running 21 seconds ago

Pure satisfaction, right? Come on; admit it!

9. Cloudification time

All good and well, but it’s about time to get this whole thing to the cloud, isn’t it? There’s actually quite a few clouds that Docker Machine supports right out of the box. Let’s pick DigitalOcean. Don’t ask me why – probably because they say it’s “designed for developers”, whatever that may mean. So get an account there. It’s not going to cost you much; just remember to not only stop, but actually remove your machines if you’re not using them. To just try some things out, it won’t cost you more than 1 or 2 dollars. Your account comes with an “access token”, and we need that one to create our new machines. Keep it somewhere safe and secret.

Now, to save you from going through all of our command line fiddling again from the start, I might as well confess to you now that… it was all scripted! Find the repo on GitHub, and download, clone, or fork it.

The scripts are designed to operate on a swarm for a conceptual “environment”, e.g. “dev” for your local development laptop, “tst” for the testers, “acc” for user acceptance, and “prd” for production (the end user environment), but you’re free to choose your own names.

Running the “nodes” command with just “dev” as the environment argument will create the nodes “dev-manager-1”, “dev-worker-1”, and “dev-worker-2”, and swarm them up together. What we’ve been so painstakingly creating above, we could recreate from the ground up, with a snap of the fingers, like this:

$ ./nodes -m 1 -w 2 create dev
$ ./rethinkdb/r create dev
$ ./go/build -p 9090 ./rethinkdb/go/rethinkswarmmode \
yourname/rethinkswarmmode:0.1 dev
$ ./app -t 9090 -r 6 rethinkswarmmode \
yourname/rethinkswarmmode:0.1 dev

Local (on VirtualBox) is the default destination – to get the nodes up in the cloud, save your DigitalOcean access token in an environment variable:

$ export DIGITALOCEAN_ACCESS_TOKEN="945g4976gfg497456g4976g3t47634g9478gf480g408fg420f8g2408g08g4204"

Now you could pull a three-node “tst” swarm up in the cloud, like this…

$ ./nodes -m 1 -w 2 -d digitalocean -F tst

…but a swarm with all nodes just sitting in the same place together, isn’t nearly the most fail-safe of all, is it? Let’s fix that. DigitalOcean has separate regions (note that while most are reported “available”, some others aren’t), enabling us to swarm around the world in 80 nodes (or 3):

Start with a clean slate:

$ ./nodes rm tst

Create “tst-manager-1” in Amsterdam:

$ export DIGITALOCEAN_REGION="ams3";
$ ./nodes -m 1 -d digitalocean -F tst

Create “tst-worker-1” in Singapore:

$ export DIGITALOCEAN_REGION="sgp1";
$ ./nodes -w 1 -d digitalocean -F tst

Create “tst-worker-2” in New York:

$ export DIGITALOCEAN_REGION="nyc3";
$ ./nodes -w 1 -d digitalocean -F tst

When done, you should see the new nodes listed as “droplets” in your DigitalOcean account.

Now we can spin up the RethinkDB cluster on the “tst” swarm:

$ ./rethink/r tst create
* removing db0...
* removing db1...
* removing dbnet...
* creating dbnet...
* creating db0data...
* creating db0...
* creating db1data...
* creating db1...
* connecting...
localhost:8081 -> tst:8080

It’ll open the RethinkDB web admin again, showing the cluster with 4 connected servers. Each swarm/environment gets its own tunnel with its own port number on your local machine.

Build a Docker image for the go application server (or skip it, and test with mine from Docker Hub, by just specifying “wscherphof” instead of “yourname” in the ./app command below – Docker knows where to find it then):

$ ./go/build -p 9090 ./rethinkdb/go/rethinkswarmmode \
yourname/rethinkswarmmode:0.1 tst
* formatting source code...
* compiling...
* building image...
Sending build context to Docker daemon 5.965 MB
Step 1 : FROM scratch
Step 2 : ADD rethinkswarmmode rethinkswarmmode
 ---> 53b7d3aef48e
Removing intermediate container d664d1f2fb96
Step 3 : EXPOSE 9090
 ---> Running in 198a861bcb43
 ---> 7441c635d4ff
Removing intermediate container 198a861bcb43
Step 4 : ENTRYPOINT /rethinkswarmmode
 ---> Running in a44cb324d142
 ---> ef12312ecc18
Removing intermediate container a44cb324d142
Successfully built ef12312ecc18
* pushing image...
The push refers to a repository []
ae96e9f40d95: Pushed
0.1: digest: sha256:b474e5e6014c7f4929fb4f746f0b29948278fe33c2850a423e8da41ca721b8a3 size: 528

Lastly, run that stuff:

$ ./app -t 9090 -r 6 rethinkswarmmode \
yourname/rethinkswarmmode:0.1 tst
* creating appdata...
* starting service...
* connecting...
localhost:9091 -> tst:9090

Open your web browser at http://localhost:9091/bar, and you should find it showing that lovely little message again:

Hello, "/bar" 0.1
Hello from Rethink

Remember that droplets get billed even when turned off. So when you’re done, get rid of them:

$ ./nodes rm tst

10. But, but, but, …

…What if that precious single db replica goes down, the root of our cluster?

Well, let’s try:

$ docker-machine ssh tst-manager-1 docker service rm db0
$ curl http://localhost:9091/bar
Hello, "/bar" 0.1
Hello from Rethink
$ curl http://localhost:8081
curl: (52) Empty reply from server

So it’s not so much of a root of the cluster then, is it? The cluster keeps running without it, and the application keeps safely connected to the redundant “db1” service. But we did lose our gateway to the Rethink web admin tool.

Let’s pull it back up then:

$ docker-machine ssh tst-manager-1 \
docker service create \
--name db0 \
--replicas 1 \
--network dbnet \
--mount src=db0data,dst=/data \
--publish 8080:8080 \
rethinkdb
$ ./rethink/r tst

And… we’re back! It’ll take a minute, or two or three, before it’s reconnected to all of the other servers, but it’ll be all figured out by itself.

…What about other cloud providers?

There’s actually quite a few that Docker Machine supports. You can use any of them, by first getting an account, then setting the proper environment variables, and passing “-d drivername” to the “nodes” command. I couldn’t log in to “azure”, but have played for some time with “google” and “amazonec2”. Both proved quite a bit more complex than digitalocean; you’ll need to develop a fair amount of very specific knowledge about their security groups and network settings and stuff to get it running smoothly. I’m very interested though, to get a swarm to run on nodes that are hosted not merely in different regions, but on totally different cloud providers. Should be possible, shouldn’t it? For now, I’ll leave it as an exercise to the reader!

To CAPTCHA, or not to CAPTCHA?

Since CAPTCHAs, security-wise, apparently are to be regarded as “rate limiting only”, combined with the fact that, usability-wise, CAPTCHAs, er… well… you know… suck, I wonder: why not do rate limiting instead of CAPTCHAs?

But then: how could that work?

Suppose: you have a page where users that forgot their password would enter their email address to receive a ‘password reset’ link. You want to avoid users getting spammed in your name by people, or bots, submitting the ‘forgot password’ form repeatedly, just for fun. So you want to rate-limit it to, say, max 1 request per hour.

The GET request returning the form that would issue the POST request triggering the email sending, could set a cookie, a header, or a hidden form field with a token identifying the request. The POST handler should then decline any request without the token (and log it as an attack). Token forgery is avoided by encrypting the tokens when issued, with a key only known to the server.

The POST handler records the client’s IP address in a database, together with the timestamp, invalidating any subsequent requests within the next hour. Requests exceeding the rate limit can get a 429 Too Many Requests response status.

To prevent token reuse, and to protect against POST requests from spoofed IP addresses, tokens include the client’s IP address, and a timestamp – every new POST request has to be preceded by processing the response to a GET request from the same address. The timestamp should at least be after that of the last recorded request. You might even have the tokens time out after like 1 minute.

These rate limiting tokens in fact seem quite similar to CSRF tokens. A big difference is that for rate limiting, we have to save data server-side about previous requests, whereas for CSRF prevention, all the verification data can be contained in the POST request itself. Which is quite a shame in some way, since I started out all this by thinking I could effectively store (encrypted) CAPTCHA solutions client-side, instead of in the database… but leaving CAPTCHAs altogether sounds equally appealing, if not more!

Right-align form elements with CSS

If you’re anywhere near as old as I am, you would think “table” when you needed to align things on a web page. Especially if things are to be right-aligned. At the same time you know that these days, layout activities should be carried out through CSS, and tables have turned into a sort of evil. At least, the “align” attribute of td elements and the like is officially obsolete now – “Use CSS instead” it states, bluntly.

So… fiddle time! Look here:

Schermafbeelding 2015-06-17 om 10.27.17

The Table version looks quite all right, but is full of the obsolete align=”right” that we’re aiming to avoid. The Plain HTML version is the naked skeleton that, while functionally intact, could use some CSS magic in browsers that support it. The CSS Justify version uses “justify-content: space-between;”, while the CSS Float version uses “float: right;”. Both CSS versions look very similar, but that changes a bit when we start narrowing the viewport:

Schermafbeelding 2015-06-17 om 10.36.10

The Justify version is behaving more like the Table version there, while the Float version acts more like the Plain HTML. Let’s see what happens if we squeeze it a bit more:

Schermafbeelding 2015-06-17 om 10.40.20

Oops. The Table version is definitely in trouble here. Good thing we’re replacing it 🙂 Good old Plain HTML is holding up strong, doing better now than the Justify version. Float can’t be bothered, so it seems. Now, push the limits:

Schermafbeelding 2015-06-17 om 10.52.37

Why did we ever do Tables; who invented that?! I mean, the Plain HTML beats the hell out of it. The CSS Float version appears to be ticking all the boxes. Yay.

So what are the secrets? First of all, obviously, there’s a “float: right;” on the input elements. An important thing is the “display: flex;” on the parent element of the form (the main element in this case), since otherwise, the inputs would float all the way to the end:

Schermafbeelding 2015-06-17 om 11.06.42

The other main trick is to keep the floating pieces “in their own row”, through “display: flex; flex-direction: column;” on the form element, to prevent any messiness like this:

Schermafbeelding 2015-06-17 om 11.13.22

Lastly, the input elements are kept in view through “max-width: 95%;”, since we don’t want to see anything like this:

Schermafbeelding 2015-06-17 om 11.17.13
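Putting those pieces together, the essence of the winning Float version comes down to something like this (a reconstruction from the description above, not the original fiddle’s exact code):

```css
main {
  display: flex; /* without this, the inputs would float all the way to the end */
}
form {
  display: flex;
  flex-direction: column; /* keeps each floating input in its own row */
}
form input {
  float: right;   /* the actual right-alignment */
  max-width: 95%; /* keeps the inputs in view in narrow viewports */
}
```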

Redirect to Log In page

So when a web site gets a request for a page that is only available to logged-in users, it seems to make sense to redirect the browser to the log in page and, once logged in, redirect back to the originally requested page.



A redirect is a 30x HTTP status code, while the fact that you have to be logged in for that page should be communicated through 403 “Forbidden”. For non-human clients, the 403 response is far more useful (if not indispensable) than a human-friendly log-in-page-instead. That’s even true for the application’s very own front-end, if it issued the request through Ajax or equivalent.


Do serve a 403 response. You owe it to the protocol. Happy machine clients all over the place. Ease the mere human users by including a meta refresh. Like this:

w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(http.StatusForbidden)
w.Write([]byte(`<!DOCTYPE html>
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url=` + config.LogInPath + `">
<a id="location" href="` + config.LogInPath + `">Log in</a>
`))

So if a user typed the url to the protected page in the browser, the browser would load the log in page immediately on receiving the 403 response, while if the user is on a fancy Ajax site that tried to fetch some data under the hood, the Ajax code gets the proper semantics: 403 means we didn’t get in; let’s see what we will do about that.