Brian Romanko
Senior Engineering Manager at Meta
Passionate about software, design, and building high-performing teams

How I Work: Input Devices

I struggled for some time with intense wrist pain. I suspected this was due to poor ergonomics while computing. Over the years I iterated through a variety of input devices (Microsoft Sculpt, Trackman Marble Wheel, Magic Trackpad) with limited success. My current setup has finally alleviated my pain. I can use the computer for a full work day plus hobby project time — all pain-free. The key to my setup is an Ergodox EZ keyboard and a handful of software tricks.

Ergodox EZ Keyboard

The completely split layout is what drew me to the Ergodox. It allows your hands to sit shoulder-width apart, and you can rotate the halves to whatever angle is most comfortable for your wrists. The photo below shows a slight counter-clockwise rotation. I find the most comfort rotating the two halves slightly inward toward each other.

bromanko's desk setup

This keyboard was an intimidating purchase. First off, half the keycaps are blank. I thought I was a decent touch-typist, but navigating a new keyboard with blank keycaps was daunting, all the more so when the key placement is novel. Then there’s the price: it’s over $300 to take the dive. That’s a lot of money for an experiment. I wonder whether the high price gave me more motivation to learn. If the keyboard were $50, perhaps I would have given up after a few hours.

Despite being motivated to use the Ergodox it took me three weeks to get productive. I remember bringing it to work the first day, plugging it in and suffering to type a single email. I lasted about 30 minutes before giving up. I resorted to using it only for hobby work in the evenings and weekends. At about three weeks I found myself stumbling over keys at work and realized I was competent enough to switch fully.

Ergodox Features

The Ergodox Firmware, QMK, is very powerful. I’ve always been a keyboard-centric computer user. I prefer to keep my hands on the home row and have tried to avoid using the mouse. With the Ergodox I’ve finally been able to ditch any mouse/trackpad peripheral. I can do everything from the keyboard. In addition, I’ve been able to introduce a variety of shortcuts to increase my productivity.

Here’s my Ergodox layout. I’ll highlight a few of the biggest life-changers:

Layers

By holding a key, I can enable an overlay layer on the keymap. This is similar to how holding the shift key on a standard keyboard toggles keys to their uppercase equivalent. I’ve got three layers configured which allows me significantly more operations than the number of physical keys on the keyboard.

My base layer is used for the typical keyboard keys. Things like letters, numbers and modifiers.

My second layer is used to bring commonly typed special characters closer to my home row. Commonly used symbols (ex. (, ), $, #) are two rows lower than the number row.

My third layer is used for mouse and browser control. I can toggle this layer and use the e, s, d, f keys to move my mouse cursor. Thumb keys are used for clicking and manipulating the “scroll wheel.” While it’s not as precise as a mouse, it is more convenient and ergonomic. This is the critical feature that let me ditch my pointing device altogether.

Modifier Keys

Another feature is the ability to create momentary modifier keys. This allows the behavior of a key to change when it’s pressed versus when it is held. I’ve created several modifiers to allow for easy access to the CTRL and ESC keys.

  • Hold my z or / key to trigger CTRL
  • Tap the typical Caps Lock key to send an ESC
  • Press a thumb key on Layer 2 to open the macOS character picker for easy access to emoji
  • Press a thumb key on Layer 2 to send the screenshot chord to take screenshots without awkward contortions

Ergodox Learnings

I learned a few things in the course of using the Ergodox.

I didn’t realize how often I was holding a modifier key and pressing a second key on the same half of the keyboard. This is not comfortable. You should hold a modifier with one hand and type the second key with the other. The comfort increase is noticeable, but it’s been a difficult habit to break. Some folks go so far as to disable same-hand modifier combinations entirely via Karabiner. I haven’t gone to that length, but I have become more mindful of the habit and try to keep parity of modifiers on either side of the keyboard.

I wish I had ordered the backlit version of the keyboard. Not because I do much typing in the dark. Rather, I would like a more obvious indication when a layer is activated. The non-backlit version has a single LED to indicate the activation of a layer. This is helpful but hard to catch out of the corner of your eye. If the entire keyboard were glowing a different color, it would be immediately obvious when I accidentally toggled a layer.

Software Enabling my Keyboard-Only World

The Ergodox isn’t the only thing I’ve done to go mouse-free. Several tools have become indispensable.

macOS Configuration

The starting point for me was tweaking some settings in macOS for greater productivity.

  • In Keyboard settings, enable “Use keyboard navigation to move focus between controls”. This allows tabbing through UI elements in dialogs and other screens.
  • In Accessibility settings, enable “Reduce Motion”. This switches to lighter weight animations which run slightly faster than the defaults. I wish there was a way to speed the transitions up further but haven’t found anything.

Alfred

Alfred is such a great utility. It’s like Chrome’s Omnibox for your entire OS. I’ve got a global hotkey of Hyper+Space configured to open it. From Alfred I can type all sorts of commands to facilitate common interactions. Some of my favorites include:

  • Type the first few letters of an application name to open it or bring it to the foreground.
  • Type mathematical expressions to quickly compute values.
  • Type keywords to perform actions like muting volume, locking the screen, or emptying the trash.

Karabiner Elements

Karabiner Elements is a utility to customize keyboard commands on your computer. I’m not a Karabiner power user by any means. However, there are two customizations that I can no longer live without.

I’ve reconfigured my Caps Lock key to act as a Hyper key. If you haven’t heard of a Hyper key, it’s a key that emits the combination of CTRL + SHIFT + OPTION + CMD. This combination of modifiers is so difficult to type that it’s unlikely to conflict with other shortcuts, which makes it perfect for custom Hyper+x shortcuts. The A Modern Space Cadet article is the source of inspiration for this technique.
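For reference, a Karabiner-Elements mapping along these lines lives in the complex_modifications section of karabiner.json. This is a minimal sketch of the idea, not my exact configuration:

```json
{
  "description": "Caps Lock -> Hyper (Ctrl+Shift+Option+Cmd)",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [
        {
          "key_code": "left_shift",
          "modifiers": ["left_command", "left_control", "left_option"]
        }
      ]
    }
  ]
}
```

Holding the remapped key now sends all four modifiers at once, which is what makes Hyper+x shortcuts possible.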

The Ergodox is capable of performing this Hyper remapping itself. I do it in Karabiner because I want consistent behavior from my MacBook keyboard. I haven’t noticed any downside to doing this via Karabiner versus natively on the Ergodox.

With Hyper enabled, I’ve created a series of application shortcut keys that can be triggered via Hyper+key. For instance, pressing Hyper+U switches to Firefox. Hyper+J switches to my terminal. This is significantly faster than using CMD+TAB to navigate between applications.

Moom

I like to focus on one application at a time. This restricts the number of things that can pull my attention away from the task at hand. I needed a quick way to expand the size of an application’s window to cover the entire screen. macOS does provide Full Screen support, however it forces the window to run in a separate space. Navigating between spaces has an animated transition which extends the amount of time it takes for the next app to appear. I want that transition to be unnoticeable.

Moom is a window manager that provides keyboard shortcuts for common window sizes. I can trigger Moom for the active window with Hyper+Up. Then I can press Space to make the window take the full screen, or an arrow key to take half of the screen snapped to the edge indicated by the arrow. This is useful for cases where I’m referring to material in one window and taking notes in the other. Right now I have Firefox filling the left half of my screen and Bear on the right.

Vimium

Unfortunately, web browsing is difficult to do via keyboard. Tabbing through hundreds of UI elements is exhausting, and using a keyboard-driven mouse is annoying. Vimium has proven to be the accessibility tool that makes using a browser via keyboard possible. It is a browser extension that provides Vim-style keyboard control of your browser. You can:

  • Scroll the page contents using j and k
  • Switch tabs with J and K
  • Navigate back and forward with H and L
  • Press the f key to trigger keyboard shortcuts for all in-view clickable elements.

That last point is a game changer.

Vimium in action

I can click on any element on the page by pressing f and then typing the series of keys in the yellow overlay corresponding to the link I am interested in. This makes pretty much any UI element keyboard-accessible with only a handful of keypresses. Folks who have used Vim’s EasyMotion plugin will find it immediately familiar.

I continually tweak this setup and predict this post won't reflect reality a year from now. Sounds like most of technology.

dot-slash-go: Simple Project ./go Scripts

I 💛 ./go scripts.

If you aren’t familiar with them, give Pete Hodgson’s overview posts (1, 2) a read.

I make it a point to include ./go scripts in every project I work on. A ./go script significantly increases new developer productivity. Any open source project looking to lower its barrier to entry should adopt one.

Since I value them so much, I created dot-slash-go, an extendible, friendly framework for project ./go scripts. It enables you to create a better developer experience with less effort.

Motivation

Most ./go scripts share a general implementation. They define a set of commands as functions. Passing no arguments displays help output enumerating the commands and their usage. This boilerplate is tedious to create for every new repository.
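That boilerplate usually looks something like this minimal hand-rolled sketch (the build and test bodies are hypothetical placeholders; a real script would invoke your compiler and test runner):

```shell
#!/usr/bin/env bash
# Minimal hand-rolled ./go script -- the kind of boilerplate dot-slash-go replaces.
set -euo pipefail

cmd_build() { echo "building..."; }       # placeholder: invoke your build here
cmd_test()  { echo "running tests..."; }  # placeholder: invoke your tests here

usage() {
  cat <<'EOF'
Usage: ./go <command>

Commands:
  build   Compile the project
  test    Run the test suite
EOF
}

# Dispatch on the first argument; no argument prints the help output.
case "${1:-help}" in
  build) cmd_build ;;
  test)  cmd_test ;;
  *)     usage ;;
esac
```

Every new repository needs this same dispatch-plus-help skeleton, which is exactly the repetition dot-slash-go eliminates.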

The ./go script user experience is important. Working with multiple teams, I’ve encountered a wide range of usability. Some teams do an excellent job of outputting helpful command usage information. Others stop at surfacing the command names and leave you guessing at what they do or what arguments they support.

I created dot-slash-go to eliminate the boilerplate and encourage improved usability. Creating and documenting commands is so simple you’ve got no excuse to skip it!

Installation

Installation is a breeze. Navigate to your project root and run the following command:

bash -c "$(curl -sS <install-script-url>)"

A guided process will ask you a series of questions to create a ./go shell script and a .go folder to store metadata and commands.

Installing dot-slash-go

Re-running the install script is safe. It will update ./go to the latest version and will not delete any of your customizations.

Creating Commands

Creating new commands is also simple. Just use the included command command.

Creating a dot-slash-go command

The new build command’s contextual help and usage information can be modified by editing the generated build and build.usage files. 🎉

Give dot-slash-go a shot. Let me know what you think, open an issue, or improve it via a pull request. Enjoy!

Rediscovering .NET

With the recent releases of Visual Studio for Mac and JetBrains Rider I’ve gotten the itch to explore the current state of the .NET ecosystem. Microsoft has made some bold strides in cross-platform compatibility and I was curious about the development experience.

I was a .NET developer from the betas of Framework v1 through .NET 4. At that point I transitioned to macOS (at the time OS X) and a variety of non-Microsoft languages and platforms. The switch broadened my horizons while simultaneously making me really appreciate Microsoft’s efforts.

After nearly 10 years away from .NET I decided to create a small project and take some notes of my experience. I’m sure this post will be out of date rapidly.

Initial Decisions

  • I selected C# as my language since I’ve got the most experience with it. F# was calling but it was a distraction best saved for a later adventure.
  • For a strongly-typed, compiled language, a capable IDE is imperative. JetBrains won my heart with ReSharper, so I opted to use Rider rather than Visual Studio for Mac or VS Code.
  • Local development would be done via Docker containers. I’ve stopped installing development frameworks directly on my local machine. Instead I prefer Docker Compose and shell scripts for automating my local development experience. I take comfort in the ability to quickly get developing whenever I switch machines.

Solution Files

This seems to be a transition time with .NET Core. When I last worked with .NET you managed your code with Solution files (.sln) and Project files (.csproj). When I created a new project in Rider it produced both a project.json file and .sln/xproj files. The .NET Core documentation all revolves around global.json and project.json. So, I deleted the .sln and .xproj files. This made Rider very unhappy.

It turns out that Rider requires the .sln and .xproj files. It’s confusing to have two sets of files that appear to have the same purpose. Upon further research, I learned that project.json is being phased out completely. So the world is shifting back to MSBuild and csproj files. That’s fairly disheartening as I remember a lot of pain with merge conflicts and GUID wrangling in Project files. I hope the improvements they are making reduce the old pains.

Until Rider adds support for csproj files I’m stuck with both.

Unit Testing

XUnit appears to be the preferred unit testing framework. That’s great since it’s what I was last using. The dotnet test command works well enough. You can use glob patterns to run tests across multiple projects or assemblies: dotnet test test/**

Rider’s test runner also works well. It’s capable of both executing and debugging single tests or full suites. I can’t find a way to do file system watching and have it re-execute tests on save. Which brings me to the next point…

File System Watching

The .NET Tools include a watch command to listen for file system changes and re-run your tests or re-build your app. Unfortunately, the command is limited to a single project at a time. If you break your solution up into multiple projects you can’t issue a single command to watch for changes in all of the tests or source files.

I asked about this in the .NET Tools repo. The recommendation was to use MSBuild rather than dotnet test. That’s good advice considering the switch back to MSBuild. However, Rider’s lack of support means that I can’t yet take advantage.

Package Management

NuGet was in its infancy when I left the .NET world. My how things have changed. It now has a robust ecosystem of packages. It’s especially great to see Microsoft publishing their assemblies via NuGet.

I was thrown off by the package installation process. My expectation was that packages would default to a project-local install rather than globally in $HOME. I attempted to force packages to install locally but the best I could do was one local packages folder per project. This duplicates a lot of packages and is hard to manage with Docker volumes. There are several open issues related to this. I scrapped this approach and am installing globally. I can’t help but wonder if this will cause pain in the future.

I was also thrown off by dotnet restore vs using NuGet manually. I probably shouldn’t have installed the NuGet binary on my machine. I’ve removed it and only use the dotnet CLI.

I’m surprised that there is no command-line way to add the latest version of a package to your project, something equivalent to npm install --save or yarn add. My process involves finding the package online and then hand-editing project.json. Awkward.

Linting

It looks like the current state of things is to use StyleCop Analyzers. However, the wiki page doesn’t leave me particularly excited. Coala has CSharpLintBear, which uses mcs. I could try out Coala (but Python 3…) or run mcs myself. More work than I’m willing to invest right now.

Docker

Getting everything running in Docker was very easy. I was pleasantly surprised. I could even easily mount packages in a volume container like I would do with NodeJS modules.

docker-compose.yml:

version: "2"
services:
  app:
    build:
      context: .
    volumes:
      - .:/app
      - packages:/root/.nuget/packages
    environment:
      - ASPNETCORE_URLS=http://+:5000
    ports:
      - 5000:5000
volumes:
  packages:

Dockerfile:

FROM microsoft/dotnet:1.1.0-sdk-projectjson


ADD docker/ /usr/bin/entrypoint

ENTRYPOINT ["/usr/bin/entrypoint"]
CMD ["bash"]

The entrypoint script:

#!/bin/bash
set -e

cd /app

dotnet restore

exec "$@"


I was interested in both Akka.NET and Orleans. Neither of them supports .NET Core yet. 😔 Akka.NET has a branch for .NET Core, though. That’s a promising sign. Both projects also have GitHub issues listing the TODO items necessary for support. I’ve signed up for notifications on each of them.

Current Conclusion

I am optimistic about the future of .NET Core and non-Windows .NET development. These are definitely the very early days. There are some big changes looming that will cause quite a bit of churn to projects. I expect that things will feel more settled in 6–12 months.

Compared to frameworks in use with Ruby, Python and Node.js I find .NET’s patterns and practices to be more mature and consistent. Every non-functional concern I reviewed (ex. logging, monitoring, authorization) was well-considered and robust.

Overall 👍🏾

Secure Access Tokens with AWS and Single Sign-On

At Earnest, we’re big fans of single sign-on (SSO). SSO is great because it provides a single set of authentication credentials for accessing multiple services. Administrators can easily assign (and take away) access to services, and can enhance security by requiring multi-factor authentication challenges for services that don’t support them natively. If it’s a service someone at Earnest uses, we want it covered via SSO.

We’re also avid users of Amazon Web Services. AWS provides a SAML 2.0 identity system that ties in nicely with our SSO needs. It works as expected for the web console — allowing our team to log in directly from their SSO dashboard without a second set of credentials.

However, folks on our team often find themselves needing to do more than just access AWS via the web console. Tools like the AWS CLI, Terraform or our own applications need to authenticate as well. Traditional IAM Users have the ability to generate access tokens for these purposes. Unfortunately, SAML-based SSO logins are done via Roles — and you can’t generate access tokens for a Role.

We found ourselves with a question. How can we combine the benefits of SAML-based SSO with the need for access tokens? Amazon’s answer is the AWS Security Token Service.

The Security Token Service allows you to authenticate via a SAML provider and request a short-lived access token that can be used wherever you might typically use an IAM access token. The security benefits here are great.

  • STS tokens are only valid for a maximum of one hour. This reduces surface area in cases where a key is compromised.
  • Authentication and authorization are performed using your SAML identity provider and provisioned roles. You get all the same provisioning/de-provisioning benefits.
  • Support for multi-factor authentication challenges.

All we needed to do was integrate our SAML-based SSO provider (Okta) with the AWS API. Amazon provides a few examples of this online, but due to technical challenges neither worked properly with Okta. So, we expanded the general idea to support Okta (with multi-factor authentication via TOTP).

The result is a user-friendly CLI for authenticating, generating an STS access token, and updating your local environment within seconds. It’s a big security enabler.

Generating AWS STS tokens via Okta SSO

How it Works

The process of authenticating with Okta (and many SAML SSO providers) is only possible via form-based authentication. We’re using headless browser automation (via the excellent Nightmare) to emulate a form-based sign-on.

  1. Prompt user for SSO-provider username and password
  2. Use a headless browser to navigate to the login page and submit the credentials
  3. Prompt for a TOTP token
  4. Use the headless browser to submit the TOTP token
  5. Parse the response from Amazon to extract the SAML assertion
  6. Present accessible roles to the user (if more than one) and allow them to select the role to assume
  7. Use the STS API to assume the role
  8. Save the token information to the AWS credentials file
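As a sketch of the final step (the earlier steps are Okta and STS API calls), updating the credentials file amounts to appending a named profile. Everything below is a hypothetical placeholder; the real tool does this for you with values from the AssumeRoleWithSAML response:

```shell
# Hypothetical sketch of step 8: append short-lived STS credentials as a
# named profile in an AWS credentials file. In the real flow the key ID,
# secret, and session token come from `aws sts assume-role-with-saml`.
write_aws_profile() {
  local file="$1" profile="$2" key_id="$3" secret="$4" token="$5"
  {
    printf '[%s]\n' "$profile"
    printf 'aws_access_key_id = %s\n' "$key_id"
    printf 'aws_secret_access_key = %s\n' "$secret"
    printf 'aws_session_token = %s\n' "$token"
  } >> "$file"
}

# Usage with obviously fake values:
write_aws_profile /tmp/aws-credentials okta-sso ASIAFAKEACCESSKEY fakeSecretKey fakeSessionToken
```

Because the token expires within the hour, re-running the generator simply overwrites this profile with fresh credentials.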

We’ve open sourced our token generator. It supports assuming an AWS role and will automatically update your AWS credentials file with the new credentials. It even assumes roles across multiple AWS accounts, if that is something your organization does.

At the moment, authentication is only implemented for Okta. But, adding support for other SSO providers should be straightforward. Please submit a pull request if you add support for your own.

How Earnest Engineers Make Decisions

I just rewrote our interest rate calculator. It distributes calculations across a cluster of servers. The codebase is 10 times larger but it’s sooo fast! Could I get a code review? — Scotty

This is a fictitious email out of my bad dreams. (Yes, I dream of emails. I prefer that to dreams about debugging.) It’s not that I don’t love performance improvements or distributed systems — both are important and have their place.

But in my dream Scotty created a complex new system that will be harder to reason about and learn. I wish he hadn’t spent time on it. Good for me that this is just an example, and Scotty didn’t actually build anything like this. That’s because Scotty, and every engineer at Earnest, has a shared understanding of how we make pragmatic decisions.

Software engineers are inundated with decisions at every step of the product development process. When we were a small team I could participate in all of the important ones. Unfortunately, I don’t scale particularly well (horizontally or vertically).

As the engineering team has grown from 10 to 30 members, I’ve looked for ways to enable autonomous decision making that leads to investing our time and energy in the right things.

We’ve developed a simple framework for investment decisions based on a memorable acronym: PASSMADE. It’s a term popularized by Microsoft for their Solutions Developer certification: a checklist of the important non-functional, architectural concerns that must be considered when building software.

PASSMADE stands for:

  • Performance
  • Availability
  • Security
  • Scalability
  • Maintainability
  • Accessibility
  • Deployability
  • Extensibility
With unlimited resources, we’d invest equally — and heavily — in all of these. In the real world, we make tradeoffs. At Earnest, three principles are more important than the others: Security, Maintainability and Availability. We don’t avoid the other concerns. Having performant systems and accessible products is important. We merely prioritize these principles when making decisions. Here’s why:

Security: Earnest is a financial technology company using tremendous amounts of data to evaluate our clients’ financial responsibility. The privacy and security of this data is of critical importance. The lifetime relationship with our clients is built upon the trust that we will keep their data safe.

Maintainability: We’re building a company intended to last generations. With a time horizon that long there will be plenty of changes to our software, services and products. The flexibility to adapt to these changes is a competitive advantage enabling our long term success.

Availability: We’re a technology company and it’s 2015. We don’t keep banker’s hours. Our clients expect 24/7 access to services and a feature set that enables them to self-serve.

By selecting and communicating which specific principles are most important we enable everyone on the team to make similar pragmatic decisions.

That means when Connie, for example, is discussing next week’s work with her team, they will decide what projects to tackle based on this framework. Should they invest in a better image compression system to make our pages load faster? Or should they simplify the page build process so it has fewer moving parts? Our investment preference for maintainability over performance makes the path clearer.

Prioritizing our architectural concerns has been a positive enabler for our team. The framework is embedded within our software design process allowing us to scale consistent decision making while tripling the size of the team. Unfortunately, there is one decision PASSMADE can’t help make for you. You’re on your own with naming things.

If you also like secure, maintainable and available software — or distributed interest rate calculators — give us a look, we’re hiring.

New to Earnest? Earnest is a technology company using cutting-edge data science, smarter design, and software automation to rebuild financial services. Founded on the belief that financially responsible people deserve better options and access to credit, Earnest’s lending products are built for a new generation seeking to reach life’s milestones. Using a unique data-driven underwriting process, Earnest understands every applicant’s full financial story to offer the lowest possible rates and radically flexible loan options for living life.

Meeting Earnest

One of the most difficult things about choosing where you work is getting a complete picture of what it’s really like there. What are the people like? What are their values? What types of challenges do they get to work on day to day? There’s only so much that a careers page and job description can expose. In the past, I’ve learned the most about companies by talking to the people who work there.

If you’ve ever been curious about Earnest — especially the Data or Engineering teams — you’ve now got an ongoing chance to learn more by talking to me. You may already know that we’re building a next-generation bank leveraging technology and data science. Now you can get answers to all of your other questions.

Starting this Wednesday, (March 18, 2015) every week I’ll be working from a different San Francisco coffee shop from 8am until 10am. The complete schedule is:

  • Wednesday March 18, 2015 8am — 10am: Specialty’s at 1 Post Street right next to Montgomery station
  • Thursday March 26, 2015 8am — 10am: The Creamery in SOMA
  • Tuesday March 31, 2015 8am — 10am: Starbucks at Bryant and Mariposa in Mission/Potrero
  • Thursday April 9, 2015 8am — 10am: The Creamery in SOMA

If you’d like to chat about anything, from engineering practices at Earnest to how we think about using data to model risk, feel free to stop by. It’s a no-pressure, no-sales environment where I’d be happy to answer any questions you have and give you a sense of what we’re all about.

You can find out exactly where I’ll be by checking the Earnest Twitter account. (Or you can message me directly.) We’ll get the information out a few days in advance.

I hope you’ll stop by and say hello.

Using Docker to Test Production SSL Certificates

Whenever I get a shiny new SSL certificate for a production hostname I can’t help but feel some anxiety. Does the certificate have the proper intermediate chain? Does the private key match the certificate? Are the SANs correct?

With Google’s deprecation of SHA1 certificates I have several services that need certificates re-issued and replaced. This felt like a good time to set up a small process I could use to test these certificates prior to putting them into production.

First, I created a simple testing ground for my certificates and apps. A root folder containing sites-enabled and certs subfolders.

Next, I placed my certificate chain files and private keys in the certs folder. In the sites-enabled folder I configured SSL servers for each of the certificates I was trying to test.

Here’s an example that runs http and https listeners and redirects all traffic to the https server. (The certificate filenames are placeholders.)

server {
  listen 80;

  location / {
    rewrite ^ https://$host$request_uri? permanent;
  }
}

server {
  listen 443;

  ssl on;
  ssl_certificate /etc/nginx/certs/example.com.chained.crt;
  ssl_certificate_key /etc/nginx/certs/example.com.key;
  ssl_client_certificate /etc/nginx/certs/ca.crt;
}
With this configuration in place, I pulled down an nginx docker image.

docker pull dockerfile/nginx

Now I was ready to spawn a docker container referring to the configuration files:

docker run -i -t --rm -p 80:80 -p 443:443 -v /Users/brian/projects/ssl-test/sites-enabled/:/etc/nginx/sites-enabled -v /Users/brian/projects/ssl-test/certs/:/etc/nginx/certs dockerfile/nginx nginx

The final piece is to test that the new certificate is working. The easiest solution was to edit my hosts file to resolve the production hostnames to the running container. Since I’m on OS X, this will be the IP of my boot2docker VM.

# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.       localhost broadcasthost
::1             localhost
fe80::1%lo0     localhost

# boot2docker VM (substitute the IP reported by `boot2docker ip`)
<boot2docker-ip>  boot2docker

Opening a browser and pointing it at the production hostname will now resolve to my boot2docker VM, which maps ports 80 and 443 to the running nginx container with the new certificates in place. I can confirm that the certificate chains are correct and the SANs are working properly, all prior to deploying these certificates to production. Peace of mind acquired.
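Beyond the browser check, a couple of openssl one-liners answer the anxiety-inducing questions directly. The sketch below generates a throwaway self-signed certificate so it is self-contained; in practice you would point the same commands at the real files in the certs folder:

```shell
# Create a throwaway key and certificate to demonstrate the checks.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout example.key -out example.crt \
  -subj "/CN=example.com" 2>/dev/null

# Does the private key match the certificate? The two moduli must be identical.
cert_mod="$(openssl x509 -noout -modulus -in example.crt)"
key_mod="$(openssl rsa -noout -modulus -in example.key)"
[ "$cert_mod" = "$key_mod" ] && echo "key matches certificate"

# What names does the certificate cover? (Grep the full text dump for SANs.)
openssl x509 -noout -subject -in example.crt
openssl x509 -noout -text -in example.crt | grep -A1 'Subject Alternative Name' || true
```

These checks catch a mismatched key or missing SAN before nginx ever serves the certificate.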

Elegant Node.js Web Services: Pipelines

Node.js Web Service Functionality

Out of the box, both Restify and Express treat HTTP request/response calls similarly. They create the purest JavaScript representation of their underlying HTTP counterparts. A request comes in, one or more JavaScript functions execute, and a response is sent to the client.

A well architected Node.js web service/app needs to execute a host of functionality on each request:

  • Request Validation
    Ensure the request conforms to the contract specified by your server. Examples include Accept header parsing, CORS/JSONP validation, or throttling.

  • Logging
    Attach logging capability to a request and emit request information to attached loggers.

  • Request Transformation
    Clean up a web request and convert raw objects to something more usable later in the pipeline. Examples include query string and body parsing.

  • Authentication
    Read request authentication information and match the data against an authentication data store. Accept or reject credentials.

  • Authorization
    Verify that the authenticated user (or lack thereof) is allowed to access the resource requested.

  • Business Logic
    Perform the functionality required by the specific request. Examples include reading information from a data store or performing an operation.

  • Responding
    Package appropriate data and transform it into an acceptable format that conforms to the contract specified by your server (xml, json, csv).

  • Auditing
    Logging response information and performing any post-request hooks. This is also a great place to apply developer safeguard behavior in non-production environments.

That’s a lot of functionality! However, breaking each behavior down into a function and executing them as a pipeline leads to an elegant separation of concerns. (I’ve had the misfortune of working on a codebase where these features were repeated in each method call. That was a maintenance nightmare.)

An Example

Here’s a walkthrough of a boilerplate Restify web service that provides all of the above behavior. Each of the concerns are separated and managed as discrete pipeline components. (Much of the implementation is pseudocode. It’s intended to show how the pieces fit, not provide a complete implementation.)

Middleware

The first method by which I attach functionality to our request pipeline is via middleware. Middleware functions are executed once per request in the order they are attached. In both Express and Restify, these middleware are added via the use method. Both frameworks come packaged with common middleware that handle several of the aforementioned concerns. In application specific cases you can easily provide your own middleware functions to accomplish common behavior.

Out of the Box Middleware

In the Restify example, I leverage several provided middleware to handle most of the basic request concerns.

// Request validation
server.use(restify.acceptParser(server.acceptable));
server.use(restify.throttle({ burst: 100, rate: 50, ip: true }));

// Logging
server.use(restify.requestLogger());

// Request transformation
server.use(restify.queryParser());
server.use(restify.bodyParser());

Custom Middleware

Authentication is application-specific in implementation. However, it is still functionality that must be performed on every request. It’s easy enough to create custom middleware to handle concerns such as this in a consistent manner.

// Authentication (pseudocode)
server.use(function (req, res, next) {
  // Match the supplied credentials against the auth data store
  // and attach the authenticated user to the request
  req.user = authenticate(req.authorization);
  return next();
});

I also leverage custom middleware to attach convenience methods to the request and response objects. For instance, I created Jiggler, a framework for customizing the serialization of model objects for REST responses. A custom middleware is added to reduce Jiggler transformation and response to a single line of code.

// Some convenience methods for transforming response objects

Route Handlers

Once common behavior is added to the pipeline, we can concentrate on the functionality unique to individual endpoints. Express and Restify provide route handler functions to associate a function of code with an HTTP endpoint URL. Here’s a typical implementation.

server.get('/', function (req, res, next) {
  res.send({
    version: '0.1'
  });
  return next();
});

An often overlooked feature of these methods is the ability for each route registration to instead be passed an array of functions to be executed in order. That’s right — a pipeline embedded directly in each route. Here I attach functionality that is common in behavior, yet is dependent on local arguments. Typical use cases include validating request parameters or loading a model object from a URL key prior to execution of core route logic.

// Route with a pipeline of methods
// The first function will ensure required params are passed
// The second function performs our actual business logic
server.get('/tasks', [
  function (req, res, next) {
    // Ensure required params are passed (pseudocode)
    if (!req.params.filter) return next(new restify.MissingParameterError('filter'));
    return next();
  },
  function (req, res, next) {
    res.send({
      tasks: [
        { name: 'Get groceries', status: 'Not done' },
        { name: 'Walk the cat', status: 'Not done' }
      ]
    });
    return next();
  }
]);

Post-Route Middleware

Finally, some behavior needs to execute on every request yet should occur after the unique request business logic has completed. Some examples include audit loggers and developer safeguards. This functionality can be added to our pipeline in an after event handler or by adding the middleware via use after all routes have been defined. I prefer the former because it is more explicit.

if (CONFIG.server.auditLog) {
  server.on('after', restify.auditLogger({
    log: new Logger(CONFIG.server.auditLog)
  }));
}

Developer safeguard middleware is a great way to protect yourself and other developers from easily caught mistakes. For instance, on one project we had a problem with slow execution time — particularly slow database queries. I added a post-route middleware to detect execution times above a certain threshold and then return a server error if the threshold was exceeded (in NODE_ENV=development only — just in case). This forced developers to keep performance in mind while developing.

Closing Thoughts

I recommend reading through the complete example codebase to get a sense for the patterns in practice. Node.js is a great platform for building complex web services. However, the code complexity that can arise requires a bit of diligence when crafting solutions. Mind the pipeline and keep your concerns separate. Future you will thank you for it when he or she isn’t neck-deep in callbacks.