Simple Engineering

Hoogy Engineering Blog


Testing NodeJS Applications is a compilation of patterns and hacks for testing large-scale NodeJS applications. It evolved from personal notes taken across various projects.

Without losing focus on testing –– this book also explores best practices to write, deploy and maintain quality code.

This article is undergoing heavy editing. More content is going to be added, removed or refined. If you have any questions or problems in your day-to-day work and need help on that front, tweet me the problem @murindwaz. If I am not able to help, I will find someone who can!


Ideally, tests should just work. That is not always the case. Things just break.

As a result, developers often spend more time fixing testing code than they spend fixing bugs, refactoring or adding functionality. The frustration that follows is one of many reasons programmers feel discouraged, skip writing tests, or drop automated testing altogether.

The main focus will hence be providing tips to spend less time fixing testing code, and more time refining features and crushing bugs.


In the beginning, there was no intention to write yet another TDD book. Over the years, my personal notes grew out of control. I felt the need to add better structure if those notes were ever going to serve a technical purpose, while keeping my sanity in check.

The following are other motivations that kept me going:

  • Opinionated JavaScript testing resources that barely scratch the surface are good to get started. Mature projects require deeper knowledge.
  • Developers always do the heavy lifting. When the best tools become hard to come by, developers simply make them: that is the beauty of the open source spirit.
  • There are quite a number of nodejs/expressjs integration testing resources. Few are dedicated to edge cases –– this book is all about those forgotten cases.
  • Digging the internet for the same issue, time after time, and expecting a different outcome is the mother of all craziness. The effort in this book is to build atop those smaller findings into one single document I can return to for my daily tasks.
  • Few resources address the complexity that comes with large-scale NodeJS/Express applications. In fact, an express application (server) may also start cron jobs, be coupled with WebSocket for real-time features, or serve a stream of content from various databases or third-party sources. This resource introduces some simplicity into a complex system.

For newbie and veteran alike, automating JavaScript tests tends to be rather intimidating. The good news is that “practice makes perfect”, just like in painting. Constantly honing your craft with small daily improvements will yield tangible progress over time.

This book touches on common testing use cases, and goes deeper into more advanced edge cases: for instance, testing model dependents without spinning up an actual database instance, techniques used to test asynchronous code, and techniques used to avoid reads/writes to the file system. We will have a look at common patterns used while testing streams. On the third-party library front –– we will explore ways to test services without hitting remote REST or WebSocket endpoints.


The TDD way stipulates writing failing tests first, followed by corrective code, and so forth. Whereas that makes sense, in the real world the order in which you approach software design doesn't always match that order, especially when working on legacy code. However, this book follows, whenever possible, the following development cycle:

  Code Sample > Challenges > Modularization > Refactoring > Test.

Those already familiar with the TDD way may find this approach odd, and their worries make sense. But there is a reason that makes this approach work in the current context.

First, it is possible that the reader is dealing with a legacy codebase that lacks good test coverage. It would be naive to delete the whole codebase just to make sure the developer starts by writing some failing test first, then some code. There is already something, and we want to make that something testable. That is the whole premise of this book.

Second, in most large-scale apps, unless started on a good footing, many people have access to modify the code, which means reading both the code and the accompanying tests takes precedence. We assume the developer reads the code, identifies challenges to testing it, identifies key areas where modularization is required, applies the modularization in a refactoring phase, then adds accompanying tests.

Third, the refactoring may come from ideas identified in the challenges section. That does not prevent the developer from writing tests followed by some refactoring: those two steps work in concert to make the whole development easier.


To the best of our knowledge, we will state the origin of the code, ideas and questions. Any failure to mention the source of code used throughout this book will be accidental –– and we are open to correcting course if a reader provides us with a hint.

Some samples, or examples, may be sourced from publicly available sources such as popular Q&A sites like StackOverflow, Quora or Reddit. There are excerpts taken verbatim from GitHub library documentation, library code samples and library issues.

Every developer blog, or tech blog, that inspired the end result will also be credited. In such cases, the contribution will be made clear. Hackers' gists will also be referenced whenever it applies.

Examples from the author's personal projects will not necessarily comply with this detail, for obvious reasons.

Table of Contents

This table of contents highlights the key areas that are the subject of this book.


Technical documents have a multitude of ways to organize content –– this is just one of them. The content structure focuses on key areas where developers tend to hit roadblocks.


  • Objectives — Who, Why you may not need this mega tutorial
  • Setup — Making your testing environment suitable for work
  • Workflow — Task automation tools that help with productivity
  • Project Layout — Conventions around layout of NodeJS/Express Projects
  • Modularization — Breaking big components into smaller manageable modules
  • Servers — Strategies to test NodeJS and Express servers.
  • Routes — Testing REST endpoints and Testing/Mocking authenticated routes
  • Configuration — Configuration tuning depending on environment
  • Middleware — Testing Express Middleware in isolation
  • Controllers — Testing Route Controller in isolation
  • Utility Libraries — Starting from a blank slate
  • Async – Callback — Strategies to Test and Mock Callback Functions
  • Async – Promises — Strategies to Test Promise
  • Async – Streams — Strategies to Test and Mock Streams
  • Models — Testing Mongoose Models without hitting the database
  • Services — The need of a Service Layer in NodeJS
  • WebSocket — Testing WebSocket without hitting remote endpoints
  • Background Jobs — Testing and Mocking long running background Jobs
  • Addendum — More on maintaining large scale NodeJS applications
    • Versioning
    • Documentation
    • Memory Leaks
    • Infrastructure
    • Deployment
    • Zero Downtime
  • References — and reading list


The previous table of contents highlights high-level key areas where developers tend to have issues. The document will not only try to stick to those talking points but also add opinions, as well as use-case code samples.


Beyond the test, build, deploy cycle –– maintaining large-scale legacy NodeJS applications is a daunting task. It takes discipline, structure and rock-solid processes to succeed in this endeavour.

The discipline is being able to incorporate automated testing and code review into the development cycle. The main objective is to document ways to mitigate some challenges while writing testable code.


The objective of this book is to provide additional clarification on the reasons why to test. After the why, the what and the how of testing will be discussed as well.

Why Testing

Manually testing all features on large projects is tedious, and sometimes not feasible. It is worth mentioning that you cannot guarantee the sanity of a piece of code, and for that reason the whole system, unless it is thoroughly tested on every iteration.

Automated tests are a good way to remember how a bug was resolved in the past, thereby preventing the same issues from happening in the future. When well designed, they serve as garde-fous when a piece of code is altered or removed.

It is always good to remember that test coverage doesn't guarantee bug-free code. Tests are, rather, a memory of how issues have been resolved, safeguarding against the same problem happening again.

Last but not least, tests give confidence while refactoring code. Indeed, test-driven development makes sure any refactoring adds value.

What to test

Every piece of code written should be tested, in one way or another. The good way to start is to test new code, when an addition is required. It takes time to write, refactor and maintain test cases. Just like buying insurance, it costs more upfront but is worth it.

For projects that lack good test coverage, paying off technical debt is a good starting point. For legacy projects, it is better to start small, on the most used, most unstable parts of the project. Chances are you will be working on those parts trying to fix issues anyway.

Ideally, test before writing code, not the other way around, and take it slow.

A recurring question to address: how to test routes without falling into the integration testing trap?

How to test

Contradicting ideas around software testing are more about the how than the why.

There is no one way to write good test cases, but there are common traits shared by all test cases: Test Case > Feature > Expectations. What goes into the expectations defines the success, or lack thereof, of your program. Features, on the other hand, depend heavily on what the test case is about. When the test case is a class, the features are going to be the methods/functions of the class. When the test case is a method, the outcomes of various parameters play the feature role.

Since test cases are not cast in stone, it makes sense to refactor them. Refactoring is not rewriting, but better reorganization, better documentation, or grouping similar code blocks into fixtures or test utility libraries.

Talking about the how: the main program components are going to be tested in isolation. That is what unit testing is all about. By main program components, we mean routes, models, controllers, services and servers.

To avoid the integration testing trap, anything that reads from or writes to an external medium will be stubbed. Stubbing is replacing the section that does the read, or the write, with a controlled function that mimics the read or write behavior. The expected data (the response) will be mocked, meaning a pre-programmed data structure plus the data of the response of the stubbed function.

There is a lot of discussion around unit testing. At the end of the day, it is better to automate repetitive tasks –– manual testing in our case. That is what programming is all about.

Pros — Unit Tests

  • Steer pre-release confidence.
  • Give code refactoring confidence.
  • Prevent unexpected bugs.
  • Help new developers understand code.

Cons — Unit Tests

  • Take time to write.
  • Increase the learning curve.

Key Takeaway

We can argue about pros and cons all day, every day. These are some key takeaways from this classical testing discussion:

  • It doesn't matter how you do it, as long as it produces results.
  • Testing is a preventive measure against running into big trouble.
  • While testing, focus on strategies that work for the moment. Explore enhancements often, in a way that doesn't hold back daily productivity.
  • The Pareto Principle applies to testing as well: if 80% of problems come from 20% of an unhealthy codebase, it is probably time to fix the health of that 20%. Automated tests should serve as a memory of those fixes.


The key idea of this section was to lay out not only why, but also what to test. We highlighted key pros/cons of testing –– especially unit testing.


Going down the rabbit hole ~ Is TDD dead? This question had some programmers debating the need, or lack thereof, of testing your code the TDD way. Kent Beck's Facebook note makes you wonder which replacement could be suitable, if any. DHH penned what looks like an obituary, but doesn't rule out automated testing. You can also learn from Uncle Bob (Robert C. Martin) why, in his view, TDD doesn't work.


The starting point of any project is environment setup. The setup includes the kind of machine suitable for a given project, the programming language –– compiler, framework suitable to the project's needs, and the list goes on.

This section is about the choice of testing tools.


The choice of testing tools is not immune to the heated discussion around anything tech, characteristic of many software development communities. Tech people always find a way to go tribal. To reduce the noise, the messaging of this book focuses on the general consensus.

This section discusses tips to make your testing environment ready for work –– but does not dictate which tools to adopt. We may not agree on which tools to choose, but we can agree that we need some kind of tools.

Choosing your tools

At this point, the premise is that writing tests is a good thing, whatever testing school you subscribe to. We also agree to explore the TDD and BDD schools of testing.

This section gives hints on things to consider while choosing your ideal testing framework. The wide variety of testing frameworks comes with a hefty price: choice paralysis. While choosing your testing tools, the following key points will factor into your decision matrix:

  • Taste ~ no matter how advanced framework Y is, you may just enjoy using framework X with all its shortcomings. If you have no external constraints, such as your boss, go ahead and use whatever you enjoy using. The zeal and love of your tools help you become a master of your craft.
  • Learning curve ~ Time is money. Time is scarce. It makes sense to choose tools with a short learning curve. That saves time when a transition to new tools arises, following a sunset or a change in requirements.
  • Stability ~ How stable the testing framework turns out to be plays a big role in the time spent debugging testing code versus doing actual work.
  • Integration ~ How easily it integrates with other existing testing tools. Testing frameworks tend to integrate with other tools. A lack of documentation, or of a plug-and-play architecture, translates into more time spent hacking integration with other tools.
  • Community ~ There is nothing worse than being the first alien on Mars. The size of the community using the framework plays a big role in getting the help you need. Things like documentation, solving framework bugs and sharing know-how all depend on the size and enthusiasm of the community around a framework.
  • Openness ~ Some open source software is iron-fist-led. Involvement in the evolution of such a framework declines because of politics around the product's development. You may well remember the reasons that led a team of engineers to fork io.js off the nodejs runtime. You want stability and green builds, not bad politics.
  • Completeness ~ Some frameworks allow you to bring your own tools; others provide an all-in-one solution. A framework like Jest comes with spies, mocking and reporting enabled. Others, like Mocha, provide you with a barebones framework, which makes it easy to plug in additional tools as you wish. Pick whatever makes sense to you.


The following is an example of the tools used within the scope of this book. It is not required to have them verbatim. However, having exactly the same tools will help in learning the techniques discussed in this document. At the end of the day, the choice of tools for a specific project is incumbent on you, the developer.

Without further ado –– this is a non-exhaustive list of the tools used in this document:

  • Test runner — mocha. This book's choice of test runner framework is mocha. Other frameworks such as jasmine-node can do a good job as well. Jest looks like a good alternative for testing node too. jasmine-node ships with jasmine version 2 and beyond.
  • Test reporter — istanbul is the reporting tool used to generate unit test reports.
  • Task runner — npm and gulp scripts.
  • Assertion libraries — In addition to the native assert, chai can complement it, as it comes with should and expect baked in.
  • Spy libraries — sinon stubs, and other library-specific spies such as sinon-mongoose, etc.
  • Mocking libraries — sinon has good tools for mocking. Other library-specific mocking tools such as httpMock, mockgoose, etc. will be used to complement mocking needs that sinon lacks. There are well-known libraries that help with mocking HTTP requests; the choice of a library depends on how deep the mocking should go.
  • Mocking HTTP — nock. This framework provides ready-to-use responses. It is best used to avoid overwhelming third-party services with test requests. Requests are intercepted before they head out of the localhost, which makes it ideal when not wanting to spin up yet another server. supertest is an integration testing framework, similar to nock when it comes to testing our own endpoints rather than third-party endpoints. supertest is written atop superagent. For obvious reasons, this library is designed for end-to-end or integration testing.
  • Instrumentation and test reporting — uses gulp-coverage.
  • Auto reload (hot reload) — nodemon, supervisor or forever.
  • A library named plumber is used to log incidents with gulp tasks. There may be equivalents for other task runners as well.

Key Takeaway

The JavaScript ecosystem doesn't lack tools –– quite the contrary. To avoid analysis paralysis:

  • Adopt tools the community is massively adopting. These are known as stacks. There are tribulations while adjusting, but there is also support from the communities adopting the same tools.
  • When the environment of a specific use case requires adopting different tools than the masses do, tools with a lower learning curve but higher support in the developer community should be given higher consideration. The reasoning behind this argument is that the developer community makes tools better — all the time. If a tool is not there yet at the time of consideration, it will probably be better the next morning.
  • When hacking on a new project, the choice of tools should be based on one's preference. Curiosity will carry you through the painful moments.


This section highlighted key categories of tools to choose from. The idea was not to dictate the kind of tool to use, but to provide key considerations when choosing testing frameworks.


Going down the rabbit hole ~ Difference between mocha and _mocha. The above was resolved using the following issues on GitHub: Issue #262, Issue #496 and Issue #798. Sources: unit test node code in 10 seconds, Istanbul Cover, npm + mocha --watch not accurately watching files, jasmine vs. mocha, chai, and sinon, Mocking Model Level.


Testing NodeJS Applications explores ways to embed automated tests in your daily workflow. A well-crafted battery of tests helps gauge code quality improvements over time.

Disambiguation: this workflow is only about development — it has nothing to do with the node-workflow library. The term workflow here refers to the development workflow.


"For every action, there is an equal and opposite reaction." — Newton's Third Law.

Every code change triggers a chain of events, actions and reactions, before the code gets certified as ready to hit the production servers. The simplicity, or lack thereof, depends on how teams structure their delivery pipelines.

In this section, the delivery pipeline is the subject the workflow tries to clarify, from a developer's standpoint.

This chapter introduces ways to simplify, harmonize and orchestrate steps, as well as to automate the most repetitive development tasks. Task runners such as npm, grunt and gulp, and a variety of build and transpiler tools, play a very big role.

The workflow used in this book has the following steps:

Code Change > Linting Task > Style Check Task > Tests Task > Reporting Task > Hot Reload Task 

Example: Workflow steps when code changes – it is possible to run some tasks in parallel.


This chapter provides tools to get started; tuning will be based on individual preference and project requirements. Reporting utility libraries will come in handy for evaluation against certain metrics.

  • There are quite a lot of tools to choose from. The challenge is not a lack of tools, but rather analysis paralysis. When not sure, take a combination of tools that are popular in the developer community.
  • When running global npm packages, npm becomes a liability. Global installation on a developer machine leads to problems with automated deployments. There is no indication to automatically tell npm that package A is local and package B is global. To remove any confusion, installing all modules locally makes sense. Another way is to use containerized code; that is not in the scope of the current book.
  • Sanity checks, also known as integration tests, for client-facing endpoints are slow tests. It is good practice to do most of the testing with unit tests. These are fast, and do not choke development with costly operations such as reads/writes to a database. The ratio of unit tests versus system tests always becomes a source of heated debate.
  • Some task tooling does not support ESNext, or the latest features. As a result, some tasks will have to transpile the code before doing any work. This adds overhead.

Things you may take into account to customize your workflow:

  • Auto reload after completion of a code change. This is also known as hot reloading. Libraries that have made the rounds in the nodejs developer community are nodemon, supervisor and forever.
  • Automatic test execution after a code change. In the current context, there is a mocha task bound to a test runner that should be executed. The code change is always followed by a linting task. This is an asset when working with weakly-typed code; TypeScript makes this task obsolete.


This section provides solutions to problems and challenges stated in the Challenges section. Since there will not be any testing at this moment, the refactoring section plays the role of Code, Refactoring and Testing at the same time.

It is possible to run tasks from a gulp tasks file, from npm's package.json scripts section, or from the command line. Most libraries come with a CLI utility and a set of commands that can be executed directly on the command line interface.

Refactoring –– Using the gulp's local instance

Running gulp on a remote server requires manually installing a global version of gulp, and different applications may require different gulp versions. Normal gulp installation:

  npm install gulp -g # makes gulp available to the cli/terminal, system-wide
  npm install gulp --save-dev # provides a project-local gulp under node_modules

Example: Install commands for gulp local and global availability

After a global gulp installation, the command becomes available system-wide. Installing packages system-wide may not be ideal, especially when you have quite a number of them. The package configuration does not have a flag to tell which package should be installed globally; its configuration assumes all packages are installed locally, local being a reference to the actual project. It is possible to run any package locally by leveraging the .bin executables located under node_modules. The following configuration allows running a local version of gulp.

"scripts": {
  "gulp": "./node_modules/.bin/gulp" 

Example: Adding local version of gulp [app-root]/package.json

PS: Using ./node_modules/.bin/gulp forces the local version of gulp to run, instead of the global version.

Refactoring –– Using npm and gulp

  • $ npm run gulp will use the scripts > gulp version.
  • Conversely, adding ./node_modules/.bin/ to the local PATH makes the packages available system-wide. That is not advised: it may result in additional friction when moving to another system, and moving code between systems happens more than we realize.

Refactoring –– Other Refactoring opportunities

Here is a list of other key areas that may have refactoring opportunities.

These key points may receive additional content in the near future.

  • Refactoring Linting Tasks
  • Refactoring Checking Style Task
  • Refactoring Running Test Task

The above tasks are just a few among many other tasks that can be made better.

Refactoring –– Attaching chai, sinon and expect to the global object.

Mocha is available in a project's files after it gets loaded –– via import or require. When used with chai, there tends to be repetition in loading the companion libraries used alongside it, namely sinon and chai's expect. This section is about refactoring chai in a way that those dependent libraries can be loaded into projects seamlessly.

There are multiple ways to go about this, but the most compelling is using exports. This approach won't make the libraries true globals, but it will help reduce boilerplate while testing.

    //ES5 (CommonJS) version
    var chai = require('chai');
    var sinon = require('sinon'); // sinon is a separate library, not shipped with chai
    module.exports.chai = chai;
    module.exports.sinon = sinon;
    module.exports.expect = chai.expect;

    //ESNext version
    import chaiLib from 'chai';
    import sinonLib from 'sinon';
    export const chai = chaiLib;
    export const sinon = sinonLib;
    export const expect = chaiLib.expect;

Example: Utility in test/utils/index.js

The previous pattern allows using ESNext modules with libraries still shipped in CommonJS (require) style. To put that in perspective, chai will not only be available across the project, but also importable using the ESNext module system.

The following is possible, when using the ESNext version of the previous example:

  import {chai, sinon, expect} from 'test/utils';

Example: Using generalized utility in test/index.spec.js

Refactoring –– Running mocha tests with npm

  • One of the most important steps is getting your tests to run in watch mode, with proper reporting. This section covers just that, plus a couple of tweaks that can save you a day, or a week.
  • While searching for a task runner, stability, ease of use and reporting capabilities come first.
  • Mocha might be easy to get started with, but the drawback of choosing it is that some consider it over-engineered.
  • Istanbul coverage is added using the local istanbul and the local mocha, in the test section:
  "test": "mocha -R spec  test/**/*spec.js",
  "test:compile": "mocha -R spec --compilers js:babel/register test/**/*spec.js",
  "watch": "npm test -- --watch",  
  "test": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' 
           ./node_modules/.bin/mocha -- --reporter spec  test/**/*spec.js"

Example: Test scripts in [app-root]/package.json

The following produces no coverage information, and exits without writing coverage information:

  "test": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' 
           ./node_modules/.bin/mocha -- --reporter spec  test/**/*spec.js" 

Example: Test with coverage in [app-root]/package.json — the version that fails

  • When using istanbul cover mocha, you may get the error: “No coverage information was collected, exit without writing coverage information”.
  • To avoid the above error and still get reporting, use the istanbul cover _mocha version instead.
  • The command used to test the current iteration on my private projects is:
$ ./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' \
  ./node_modules/.bin/_mocha -- --reporter spec test/**/*spec.js

Example: Test with code coverage in [app-root]/package.json – The version that succeeds

Refactoring –– Reporting Task

Istanbul will be used to generate reports as the tests progress.

# In package.json at "test" - add next line
$ istanbul test mocha -- --color --reporter mocha-lcov-reporter specs
# Then run the tests using 
$ npm test --coverage 

Example: Adding test script in [app-root]/package.json

Refactoring –– Hot Reload Task

Hot reloads on the server are delegated to tasks that start a server. As in previous sections, this task can be executed from three different places: gulp, npm, or the command line using another library's CLI utility.

Hot reload, from a nodejs perspective, is the ability to load new code without the developer executing a hard restart. A hard restart refers to stopping the server and restarting it manually.

There are three libraries to look into to achieve this: supervisor, forever and pm2.

The difference between these three libraries is that pm2 tends to be more production-oriented, while the other two are more development-oriented.
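As an illustration, a hot reload task can be wired through npm scripts. This is a sketch, assuming nodemon is installed as a dev dependency and the server entry point is server.js (both are assumptions, not requirements of any of these tools):

```json
"scripts": {
  "dev": "./node_modules/.bin/nodemon server.js"
}
```

Running npm run dev then restarts the server on every file change; wiring supervisor instead follows the same pattern.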

Key Takeaway

The workflow is critical to the daily business of crafting good software. It may be made better over time, just as the code should.

  • There are key workflow areas that are widely adopted by the developer community. It is better to start with those.
  • As time goes by and good practices emerge, it is better to refactor your workflow to reflect them.


A workflow that includes testing is a cornerstone of successful projects. Getting the workflow right pays off –– especially when most of the tasks are automated.


Down the rabbit hole ~ You are not alone if you have been wondering how to add global variables used by all tests in JavaScript. Just remember, you may run into problems such as Issue #86 (adding should on the global object) or Issue #891 (how to make expect/should/assert global in test files while still passing eslint).

In the same category of reading list

Going down the rabbit hole ~ If you want to know more, the blog post How to Solve the Global npm Module Dependency Problem provides more solutions to this problem. Localizing some packages partially solves the explosion problem.

Continuous Integration references

Going down the rabbit hole ~ Make sure you are not trapped in there ~ Continuous integration with Circle CI part I, part II – Getting started with Jenkins.

Project Layout

The project layout is understood as the file structure of a project, a project being a unit of an application. The project layout is a convention adopted by a team for the structure of their codebase.


Just like naming, structure is pretty opinionated, and many organizations set a standard to copy from. This section demonstrates key layouts, but does not suggest which one to use in the reader's projects.

Before diving into the mechanics of testing, let's look at possible layouts (main components) available in a typical NodeJS project.

The simpler, the better.


Amongst the multiple ways to lay out a project, two contenders are worth mentioning: layout by feature and layout by kind.

  • When the project is organized by feature, code is also passed around by feature. For example, EmailService can live under the Email feature. But the PaymentService of the Payment feature may need the intervention of EmailService to send messages after payment, which may introduce some communication issues.
  • When a project is organized by kind, however, pieces of code are interchangeable, and the previous scenario may be resolved on the go, since the services share the same directory. The Payment feature may well be able to use both PaymentService and EmailService.

Use Cases

The easiest way to lay out a project is to group related items together. That is the strategy used in the current documentation.

On the other hand –– some projects are organized by feature. That is, every top-level feature has its own directory of controllers, services, routes, etc. That can be an advantage when the time comes to run a feature as a microservice, but also a liability for architectures that rely on cross-cutting concerns.

  • Configurations
  • Utilities
  • Controllers
  • Routes
  • Models
  • Services
|-  config/
|   | - index.js
|-  utils/
|   | - index.js
|-  controller/
|   | - index.js
|-  routes/
|   | - index.js
|-  models/
|   | - index.js
|-  services/   
|   | - index.js

Example – Project Structure By Categories a.k.a Kind

Organizing the tests comes in two flavors: 1) testing files alongside the source code files, or 2) having a dedicated testing directory.

  • Test files sit alongside each source file –– this produces a fairly large number of files per directory.
|-  config/
|   | - index.js
|   | - index.spec.js
|-  utils/
|   | - index.js
|   | - index.spec.js
|-  controller/
|   | - index.js
|   | - index.spec.js
|-  routes/
|   | - index.js
|   | - index.spec.js
|-  models/
|   | - index.js
|   | - index.spec.js
|-  services/   
|   | - index.js
|   | - index.spec.js

Example – Test Project Structure Side By Side

  • Test files are put into their own top-level tests directory, which mirrors the structure of the project.
|   |-  lib/
|   |   |
|   |   |- config/
|   |   |  |- index.js
|   |   |- utils/
|   |   |  |- index.js
|   |   |- controller/
|   |   |  |- index.js
|   |   |- routes/
|   |   |  |- index.js
|   |   |- models/
|   |   |  |- index.js
|   |   |- services/   
|   |   |  |- index.js
|   |-  tests/
|   |   |
|   |   |- config/
|   |   |  |- index.spec.js
|   |   |- utils/
|   |   |  |- index.spec.js
|   |   |- controller/
|   |   |  |- index.spec.js
|   |   |- routes/
|   |   |  |- index.spec.js
|   |   |- models/
|   |   |  |- index.spec.js
|   |   |- services/   
|   |   |  |- index.spec.js

Example – Test Directory that mirrors Project Structure

Key Takeaway

There is a multitude of ways to organize file structure. It is worth mentioning that a simpler, easy-to-set-up file structure makes the project easier to scale when the need arises.


Project layout matters when it comes to testing –– choosing the right layout will improve the way developers feel about, and approach, testing the code they are writing.


Going down the rabbit hole ~ If you want to know more about structuring a NodeJS project, feel free to check this Example Project Structure; yet another practical example is here


Modularization is a refactoring process that splits a larger codebase into smaller, reusable, highly independent chunks of code.

“Modular programming is a software design technique that emphasizes separating the functionality of a program into independent, interchangeable modules, such that each contains everything necessary to execute only one aspect of the desired functionality.”

Modularization, then, is the refactoring process that makes a program modular –– this book will make good use of this technique quite often.


The notion of “Rotting Code”, especially in fairly large and pretty old applications, should be taken with a grain of salt –– especially given the ever-evolving environment JavaScript finds itself in these days.

Testing the code greatly reduces the probability of repetitive bugs. Repetition is the keyword. As the language or framework matures, aging dependencies can send the bug count through the roof. When people talk about “Code Rot” –– that is the rotting part of the equation.

The code may rot due to changes in dependencies. The latest updates may ship with unmitigated bugs –– new bugs in the update itself, or in one of its dependent modules.

Isolation is key –– to avoid havoc, splitting the application into smaller chunks, also known as modules, makes it easy to group tests into smaller suites, keeps them predictable under change and, most of all, limits the damage to a smaller portion of the codebase that can be fixed a bit faster.

The modularization used across this book focuses on splitting larger, hard-to-test code blocks into smaller, independent, easy-to-test modules.

Key Concepts

divide et impera

Digging deeper into old codebases that need modernization –– some aspects reveal themselves to the digger. Some codebases may well be poorly tested. A codebase may lack documentation and structure altogether.

The sad reality is that codebase documentation may well be outdated, or may not align with the needs at hand. The worst-case scenario is that some libraries may well be dead and gone. This book strives to bridge that gap. It focuses on refactoring and modernization: splitting larger chunks into smaller, more manageable components or modules.

Large codebases tend to be harder to maintain than smaller ones. Obviously, NodeJS applications are no exception. Updates in third-party integrations and the evolution of the language or libraries are some of the reasons you will be reworking your codebase time after time.

The “large” in large-scale application combines lines of code (20k+ LoC), feature count, third-party integrations, and the number of people contributing to the project. Since these parameters are not mutually exclusive, a one-person project can also be large scale –– it just has to have a fairly large line count, or a sizable number of third-party integrations.

The large scale is in terms of feature count, LoC count and integration with third party systems.

Divide and conquer is an old Roman army technique for managing complexity. Dividing a big problem into smaller, manageable ones allowed the Roman army to conquer, maintain and administer one of the largest empires the world has ever known.

Modularization is one of the techniques used to break a large piece of software into smaller, malleable, more manageable components. In this context, a module is treated as the smallest independent, composable piece of software that does only one task. Testing such a unit in isolation becomes relatively easy. Since it is a composable unit, integrating it into another system becomes a breeze.

exports –– Modularization is achieved by leveraging the power of module.exports (a.k.a. export in ES6+). Modules come as functions, objects, classes, configuration metadata, initialization data, servers, etc.

index –– The index file plays a major role when working with directories. The default file a directory exports is always index. Implementations may, and should, live in files with unique, descriptive names.

Key Takeaway

Modularization is just one of many refactoring techniques. It is important for keeping things tight; it improves testability and the overall management of the project from a code perspective.

Modularization makes the code more composable. A module can evolve into an independent library or dependency. Quick examples are libraries such as lodash, underscore, rxjs, rambda or ramda.


The modularization technique is at the core of this book, used purely as a refactoring technique. The idea is to make every refactored piece of code as independent as possible. Modular code lends itself best to composition.


If you want to dig deeper, feel free to read Export This: Interface Design Patterns for Node.js Modules by Alon Salant, CEO of Good Eggs, and Node.js module patterns using simple examples by Darren DeRider aka @73rhodes


In the scope of this book, the server will be defined as a script capable of handling client requests.

For instance, the script that listens for and processes an HTTP request and provides an adequate response qualifies as an HTTP server in the scope of this book. The server comprises the logic to listen on a port –– and forward requests to corresponding handlers for processing, eventually producing a response.


Keeping in mind that only one server can listen on a given port at any given time on the same machine, testing a server becomes a daunting task, especially when programmers do NOT want to spin up an actual server.

This section deals with simulating the start and stop of a server, as well as checking whether the server can attach other application components.

As a quick reminder, NodeJS comes with server-building modules bundled with the runtime. Modules such as http, https and net, just to name a few, constitute the server building blocks in the scope of this book.

The approach to testing the server is twofold: first, leveraging module exports to modularize the server; second, mocking anything related to spinning up an actual server. The next section gives a quick look at what a server looks like. The sections after that are dedicated to modularizing and testing said server.


A nodejs server looks like the following snippet:

var http = require('http'),
    hostname = 'localhost',
    port = process.env.PORT || 3000;

var server = http.createServer(function(req, res){
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, function (){
  console.log(['Server running at http://', hostname, ':', port].join(''));
});

Example: Code of a server in [app-root]/server.js. source

A server enhanced with the expressjs framework changes slightly and takes the following look:

var express = require('express'),
    app = express(),
    port = process.env.PORT || 3000;

app.get('/', function (req, res) {
  return res.status(200).send('Hello World!');

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!');

Example: Code of a server using expressjs in [app-root]/server.js. source

When there is a need to add WebSocket support using and expressjs, the example may take the following look:

var http = require('http'),
    hostname = 'localhost',
    port = process.env.PORT || 3000,
    app = require('express')(),
    server = http.createServer(app),
    io = require('')(server);

//Server Handler
app.get('/', function (req, res) {
  return res.status(200).send('Hello World!');

//reading messages on the socket
io.on('connection', function (socket) {
  socket.on('message', function(payload){
    console.log('message received: ', payload);

//Listening on a port
// WARNING: app.listen(port) will NOT work here –– it would create a second server
server.listen(port, hostname, function (){
  console.log(`Example SocketIO listening on port ${port}!`);
Example: Code of a server using in [app-root]/server.js. source

As requirements increase, this file grows exponentially. Most applications run on top of expressjs, a popular library in the Node world. To keep server.js small, regardless of requirements and dependent modules, moving most of the code into modules makes a difference.


When testing on a live server –– everything has to be tied to the test environment. When unit testing, however, most resources have to be virtual: it is better that no actual write to a file or database occurs.

  • Since the server runs on one port at a time, running tests in parallel may fail. Moreover, the port the server listens on may already be used for development purposes.
  • The challenge is on how to test scenarios without actually spinning up a server.


The previous examples show how simple server initialization turns out to be. However, there are additional libraries to install, such as expressjs.

  • Modularizing the two code segments above makes it possible to test the server in isolation.
  • When configuration is modularized, mocking the port becomes feasible without changing the port in an environment variable.
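A minimal sketch of that second point (names are illustrative): once the port lives in a config module, a test can derive an override without touching environment variables.

```javascript
// config/index.js (sketch) –– the port is read once, here, and nowhere else
var config = { port: process.env.PORT || 3000 };

// in a test, derive a throwaway config instead of mutating process.env
function withOverrides(base, overrides) {
  return Object.assign({}, base, overrides);
}

var testConfig = withOverrides(config, { port: 0 }); // port 0 lets the OS pick a free port
console.log(testConfig.port); // 0
console.log(config.port);     // the original value, untouched
```

Because `Object.assign` copies into a fresh object, parallel test files can each carry their own port without stepping on one another.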


The refactoring applies the modularization statement made above:

var express = require('express'),
    app = express(),
    hostname = 'localhost',
    port = process.env.PORT || 3000,
    server = require('http').createServer(app);

app.set('port', port);
app.get('/', function (req, res) {
  return res.send('Hello World!');

server.listen(app.get('port'), hostname, function() {/* ... */});

//Modularization - this line makes the server available in our tests.
module.exports = server;

Example: Refactoring Server for modularity [app-root]/server.js source


The modularized version gives a cleaner entry point to test the whole server code. Before testing, it is worth mentioning that the server.listen() function can be stubbed and its response mocked. Stubbing functions that spin up the server is, however, not a good idea when writing Integration Tests.

var http = require('http'),
    sinon = require('sinon'),
    expect = require('chai').expect;

describe('server', function(){
  //stub before requiring, so server.js picks up the fake createServer
  var serverStub = sinon.stub(http, 'createServer').callsFake(function(app){
    return { listen: sinon.spy() };
  });
  var server = require('./server');
  it('works', function(){
    expect(server.listen.called, 'Should have called the listen function').to.be.true;
  });
});

Example: Testing Modularized Server in [app-root]/test/server.spec.js

Going down the rabbit hole ~ How to correctly unit test an express server. There is also a better code structure and organization, one that makes it easy to test, get coverage, etc., at Testing nodejs with mocha

Key Takeaway

The notion of server here is not in terms of hardware, but the part of the code that plays the server's role. Handling HTTP requests or processing WebSockets are just two examples of the server notion established in this book. Since the server file may blow out of proportion, modularizing key elements and delegating some work to specialized handlers keeps the server code in check –– making it easier to test and deploy.


There would be no need to test working legacy code, were it not for refactoring. Refactoring may be needed to reduce code smells, crack down on newly introduced bugs, or modernize your codebase.

Increased modularization can come into play while refactoring. In the following section, the stress is more on modularization of the http module, with the introduction of a framework.

A NodeJS application server comes in two flavors: using the native NodeJS library, or adopting a server provided by a framework.


Routes are addresses that indicate where a request handler is located. This addressing mechanism makes it possible to split a larger application into modules dedicated to performing one job at a time.


A real-life node/express server, in addition to processing requests, may also perform scheduled background tasks such as resource monitoring. A server may well be coupled with a WebSocket endpoint. Keep in mind that the transport is not ONLY HTTP based.

With many tiers working in concert, adding a line or two can break the whole system. That is a characteristic of a poorly modularized, badly tested application.

This section will focus on edge cases such as testing node/express authenticated routes. It will also explore the complexity that comes with nodejs routes –– such as scheduling background tasks or integrating with third-party services. The focus on testing routes goes hand in hand with breaking big code blocks into smaller parts and testing those in isolation –– testing authenticated routes without falling into the integration-testing trap.

This section will also introduce more complex topics, such as mocking asynchronous code's expensive constructs to achieve the speed needed to run tests while developing the application.


Routes are fairly easy to set up when working with the expressjs framework. A framework-powered server similar to the following will be used from now on.

While following the simple principle of “make it work”, you realize that route code becomes huge, and locked into one single file. Assuming all our models are NOT in the same file as our route files, the following source code may be available:

var User = require('./models').User;
/** code that initializes everything, then comes this route */
app.get('/users/:id', function(req, res, next){
  User.findById(, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

Example: Typical Route With Model in [app-root]/server.js

The previous code sample is good enough when starting out. Like other examples throughout this book, it overloads the server file. To put things a little more in perspective, let's take the example of a new requirement: there should be a route to show details about an administrator.

It is clear that the administrator is just one of the users available throughout the system. It is no surprise that the route handler looks exactly like the one found at the users/:id route.

app.get('/admin/:id', function(req, res, next){
  User.findById(, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

Example: Another Typical Route With Model in [app-root]/server.js


This section deals with the challenges of testing expressjs routes.

The most obvious way to structure tests is to send requests directly to the router. Unless it is time for integration or end-to-end testing, this approach makes tests slow to complete on every code change, and may wreak havoc on the developer's end while setting up testing environments. We will refer to this approach as the integration-testing trap. Generally speaking, the challenge lies in how to avoid the integration-testing trap in the context of Unit Testing.

Without further ado, the following challenges arise:

  • How to test the business-logic code, and not only the returned responses
  • How to mock requests to payment gateways and third-party authentication services, to avoid overwhelming those services with test requests
  • How to mock database read/write operations without spinning up a database. This helps especially on a Continuous Integration server, where a database connection may not be available
  • How to write tests that cover exceptions and missing data structures


Modularization of Express routes

The easy way to mitigate that is to group similar functions into the same file. Since the service layer is sometimes not so relevant, we can group functions into controllers.

var User = require('../models').User;
module.exports = function getUser(req, res, next){
  User.findById(, function(error, user){
    if(error || !user){
      return next(error);
    }
    return res.status(200).json(user);
  });
};

Example: Modularizing Router As a Controller in [app-root]/controller/user.js

var getUser = require('../controller/user'),
    router = require('express').Router();
router.get('/users/:id', getUser);
router.get('/admin/:id', getUser);
//exporting the router
module.exports = router;

Example: Modularizing Router in [app-root]/routes/user.js

Both controller/user.js and the two routes can now be tested in isolation.


Refactoring technique: Manifest routes

//requiring a directory looks for index.js at the top of that directory
var routes = require('./routes');
//routes resolves to index.js in the /routes directory.

Example: Manifest Router in [app-root]/routes/index.js

var express = require('express'),
    router = express.Router();
router.get('/', function(req, res, next) {
  return res.render('index', { title: 'Express' });
});
module.exports = router;

Example: Modularizing Router in [app-root]/routes/index.js

// routes/users/index.js
var router = require('express').Router();
router.get('/get/:id', require('./get-user.js'));'/new', require('./new-user.js'));'/delete/:id', require('./delete-user.js'));
module.exports = router;

Example: Modularizing Router in [app-root]/routes/user.js

“The most elegant configuration that I've found is to turn the larger routes with lots of sub-routes into a directory instead of a single route file” – Chev source

//route handler
module.exports = function (req, res) {
  // do stuff
};

Example: Typical Route handler in [app-root]/routes/user.js

// routes/users/index.js
//update when routes/users/favorites/ adds more sub-directories
router.use('/favorites', require('./favorites')); 
/* ... */
module.exports = router;

Example: Modularizing Router in [app-root]/routes/user.js

//Using routes and the controllers' route handlers
var router = require('express').Router(),
    getItem = require('./controllers/catalogues').getItem,
    createItem = require('./controllers/catalogues').createItem;

//chaining handlers on the same route
router.route('/catalogues/:id').get(getItem).post(createItem);
module.exports = router;

Example: Chaining Requests [app-root]/routes/index.js


There is literally a tool for testing every layer of the nodejs/express stack. Those tools may delegate some of the logging to third-party tools, or come with ready-to-use solutions.

Moreover, there is a need to have a sense of what has been tested versus what has not. For that, we will need a test coverage tool. As in the logging use case, testing libraries come with reporting tools. However, we may need to hook third-party reporting tools into our codebase.

Some of the extra tools used in this section are:

  • plumber –– a small library that plugs into the gulp task runner. It makes it possible to keep logging errors to the console while running tests.
  • gulp-coverage –– this library also plugs into the gulp task runner. It provides instrumentation capabilities to generate meaningful test reports.

Testing –– bird eye view

Testing routes without spinning up a server

Routes should be served while testing. The server may not be up all the time, especially when testing in a sandboxed environment such as a CI server.

var express = require('express'),
    request = require('supertest');

describe('req', function(){
  describe('.route', function(){
    it('should be the executed Route', function(done){
      var app = express();
      app.get('/user/:id/edit', function(req, res){
        // test your controllers with req, res here
        res.status(200).end();
      });
      //triggering the actual test
      request(app).get('/user/12/edit').expect(200, done);
    });
  });
});

Example: Avoid Spinning up a Server in [app-root]/routes/user.spec.js

This example is adapted from StackOverflow and supertest. Supertest spins up a server only if necessary. In case we do not want a server at all, the alternative dupertest can be a big deal.

To sum up: spend extra time writing your tests –– it pays off. Effective tests are written before writing code. If you already have the code, a good time to add tests is before adding more code.

In the long run, bugs are expensive for any project. Take it slow.

Testing –– Authenticated Route

Testing authenticated routes may be a challenge in its own right. The middleware part of the equation will help uncover some hidden secrets to being successful.
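One such secret is that the authentication middleware can be exercised on its own, with fake req/res objects –– no server, no real credentials. The token scheme below is a made-up illustration:

```javascript
// a hypothetical token-checking middleware (the token scheme is illustrative)
function requireAuth(req, res, next) {
  if (req.headers.authorization === 'Bearer good-token') return next();
  res.statusCode = 401;
  return res.end('Unauthorized');
}

// rejected request: the handler chain must NOT continue
var res = { statusCode: 200, body: null, end: function (b) { this.body = b; } };
var nextCalled = false;
requireAuth({ headers: { authorization: 'Bearer bad-token' } }, res, function () { nextCalled = true; });
console.log(res.statusCode, nextCalled); // 401 false

// accepted request: next() fires and the route handler takes over
requireAuth({ headers: { authorization: 'Bearer good-token' } }, res, function () { nextCalled = true; });
console.log(nextCalled); // true
```

With the middleware proven in isolation, the routes behind it can be tested with the middleware stubbed out entirely, which keeps those tests out of the integration-testing trap.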

Testing –– Mocking Request

  1. Using node-mocks-http, we can use Request/Response objects similar to the ones provided by the native http node library
var httpMock = require('node-mocks-http');
//url = endpoint to test, method = HTTP verb
var request = httpMock.createRequest({method: method, url: url});

Example: Mocking a Route's HTTP Request [app-root]/routes/user.spec.js

Testing –– Mocking Response

//initialization (or beforeEach)
var httpMock = require('node-mocks-http');
var response = httpMock.createResponse({eventEmitter: require('events').EventEmitter});
//Usage: somewhere in tests
controller.useReqRes(request, response);
response.on('end', function(){ // or 'data'
  //write tests in this closure.
});

Example: Mocking a Route's HTTP Response in [app-root]/routes/user.spec.js

It is problematic to rely on a mocked response. The obvious reason is that the mocked response is not kept in sync with changes constantly made to the data model, nor with actual data samples from a real database.

Since the whole response is mocked, the test does not penetrate into the underlying functions. Ideally, it should. There is still a reason why mocking a response makes sense: the other functions, or layers, are going to be tested in isolation to compensate for the shortcomings of this mocking strategy.

Testing –– Mocking WebHook Request

The second approach is to use nock. The library's npm home page is at this web link.

This method answers the following question: “How to mock responses without hitting the server”. The use case bets on the nock library to achieve this. This is not the only alternative available, nor the best, but one approach among many.

The key difference from other strategies is that this library can also be used while testing WebHooks. In reality, a WebHook is just another request, but the assumption is that this request was sent by another server, and may not necessarily come from our own servers.

This introduces another level of complexity, especially when it comes to authentication. Ideally, issuing unique keys to the rightful server endpoints makes more sense.

To some extent, we will see why ejecting routes into independent controllers and middleware makes sense. A WebHook endpoint may well use the same functionality as any other client connected to our application. The key difference is in the authentication of the request –– remember that this aspect has been delegated to an authentication middleware.

  • The only thing that is mocked here is the JSON response.
  • To avoid hitting databases, the controller action can be spied upon or stubbed, and nock will reply with the mocked response.
  • Nock is good if you are doing one of the following:
    • Hitting a third-party REST/SOAP API: payment, email, tax or shipping APIs
    • Upgrading a third-party API from version v1.x.x to version vN.x.x, or downgrading
    • Integrating with OAuth and testing the behavior of your application based on some results
    • Expecting a WebHook from another system to hit your endpoint
  • Nock may not be suitable for one of the following:
    • Testing your own endpoints –– that is integration testing
    • When testing your own endpoints, it is better to mock the models (see below)
const expect = require('chai').expect,
      nock = require('nock'),
      // controller action method
      getUser = require('../index').getUser,
      // mocked response => module.exports = { data: {} }
      response = require('./response');

Example: Adding nock mocking module into the testing file

describe('Get User tests', () => {
  afterEach(() => { /** restore + cleanups */ });
  beforeEach(() => {
    nock('').get('/users/octocat').reply(200, response);
  });

Example: Mocking a request at /users/octocat

  it('Get a user by username', () => {
    return getUser('octocat').then(response => {
      //expect an object back
      expect(typeof response).to.equal('object');
      //Test name and location values on the response
      expect('The Octocat');
      expect(response.location).to.equal('San Francisco');
    });
  });
});

Example: Testing on a Mocked Response from Nock library

Key Takeaway

It is not an over-estimation to state that the router is a cornerstone, if not the building block, of the whole node/express application stack. There would be no expressjs were it not for its router. To make the whole application easy to manage, the Fat Model/Skinny Controller strategy has to be applied first. The extreme version removes the Fat Model part and instead adopts a Healthy Model with the introduction of a service layer.


Testing fat routes raises the bar when it comes to testing routes. When routes are modularized and broken down into easily mockable units –– such as controllers and services –– the tests not only become faster to execute, but also easier to reason about when coding and testing.


Going down the rabbit hole ~ More on organizing your nodejs application: An Intuitive Way To Organize Your ExpressJS Routes

On authenticated routes, the following documents may help

Going down the rabbit hole ~ Local Authentication with Passport and Express, BDD-TDD, How to test with Auth0 protected route

Additional material on the Reading list


The famous Twelve-Factor App methodology popularized managing configurations as part of the codebase. Every environment –– dev, qa, staging or production –– has its own configurations, decoupled from the system's codebase itself.

The configuration comes in as one of the system's dependencies.


In previous chapters, code was mixed with configurations. That makes it hard to test in isolation, whether automated or manual, introduces headaches while deploying to different environments, and the list goes on.

This section gives tips on how to manage configuration files for testability, easy deployment and management. It also gives an example of how to test the existence of configuration keys.

Modularization of configuration makes it faster to deploy the application on various environments, reduces the friction that comes with file changes at deployment time, increases the security and integrity of sensitive configuration parameters such as KEYs, and raises confidence that fewer things break at deployment time.


The “naïve” way to manage configurations is to embed variables in the codebase itself. This code snippet is drawn from the previous server code examples.

var mongodb = require('mongodb');
var mongoose = require('mongoose');
mongoose.Promise = require('promise');
var DB_URL = ''; //configuration
mongoose.connect(DB_URL, {useMongoClient: true});

Example: Configuration in [app-root]/server.js


In the eventual case this code remains unchanged, pointing to a test database URL would be complex. Technically, each unit test depending on this code will open a new connection to a live database. We can do better.

Some application configurations are located in a machine file such as /etc/config/[app-name]/config.ext. This works. The problem arises when trying to set up a new developer machine –– or in environments that change quite frequently.

Ideally, every programmer is able to deploy the latest code to a specific environment –– most of the time, staging. In some ways, democratizing deployments also gives developers access to some sensitive data, authentication data for instance.

Storing production keys? In most cases, different teams share directories via revision control systems. How can we manage configuration data as part of the program, giving developers the ability to work with the code, while limiting access to production-ready configuration keys?


It is better to move this configuration inside the code, ideally at the root: [app-root]/.conf. This approach may be a bit tricky –– the configuration file at the root of the application should never be committed to a shared repository. This avoids leaking private data to the public, in the case of public and open-source projects.

Following the previous approach makes it possible to initialize configuration variables at deployment time. It is also possible to use a third-party configuration-manager service, or even to inject the variables at run time –– as CLI arguments to the program.

Most developer tools, such as ESLint, make good use of the approach discussed in the previous paragraphs.


The quick and dirty way to manage front-line configuration files is described in the next code sample.


Example: Top Level Configuration dotenv in [app-root]/.env
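The original sample is not shown here; as an illustrative sketch, a .env file holds plain KEY=value pairs, with keys mirroring the configuration module used later in this section (values below are placeholders):

```
# [app-root]/.env –– never commit this file to a shared repository
HOST=localhost
MONGODB_HOST=localhost
MONGODB_DBNAME=app_dev
MONGODB_PORT=27017
```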

However, reading the .env in every source code file is a good ingredient for a refactoring disaster. To prevent that from happening, it is cleaner to rely on a modular approach: one module dedicated to loading configuration data into an object accessible from anywhere in the program.

The next code sample illustrates how to achieve configuration modularity:

//load .env variables into process.env (dotenv)

//Other Configurations
var Config = {
    env: process.env.NODE_ENV || process.env.ENV || 'dev',
    host: process.env.HOST
};

//MongoDB configuration
var Mongodb = {
    host: process.env.MONGODB_HOST,
    dbname: process.env.MONGODB_DBNAME,
    port: process.env.MONGODB_PORT,
    username: process.env.MONGODB_USERNAME,
    db: process.env.MONGODB
};

//export configuration as a module
module.exports = Object.assign({}, Config, {
  mongodb: Mongodb
});
Example: Configuration in config/index.js

After the previous change, the configuration becomes easier to change. The configuration object also becomes easier to mock, being a module itself.

var mongodb = require('mongodb'),
    mongoose = require('mongoose'),
    Config = require('./config');
mongoose.Promise = require('promise');
//using the value from the configuration file
mongoose.connect(Config.mongodb.db, {useMongoClient: true});

Example: Configuration in [app-root]/server.js

To make the application work, there is another aspect of configuration not discussed so far in this book: enhancing nginx to support WebSocket communication. The nodejs/express server with WebSockets via the library is our use case.

  # more configurations go in this place ...
  location / {
        # 3 lines to serve websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
  }

Example – Upgrading HTTP to support WebSocket Specifications

The previous nginx configuration file does the following –– line by line:

  • Tells nginx to proxy using HTTP version 1.1, which the WebSocket handshake requires
  • Tells nginx to forward the client's Upgrade header to the upstream server
  • Tells nginx to set the Connection header to "upgrade", completing the protocol switch


It is not obvious why testing configuration would be beneficial. That becomes even more problematic when most variables come from process.env. Using dotenv and placing the .env configuration file at the root removes ambiguity. The unit test should verify that DB_URL is available among the environment configuration variables.

var assert = require('chai').assert;

describe('#Config', function() {
    beforeEach(function() {
        this.config = require('../../config');
    });
    describe('#MongoDB', function(){
        it('exposes well-formed MongoDB settings', function(){
            assert.isObject(this.config.mongodb, 'MongoDB should be an Object');
            assert.isString(, 'Host should be a String');
            assert.isString(this.config.mongodb.dbname, 'DBName should be a String');
            assert.isString(this.config.mongodb.db, 'DB should be a String');
        });
    });
});

Example: Testing Configuration in test/config/index.spec.js

Key Takeaway

The Twelve-Factor App became the reference manifesto for reasoning about configuration. Since configuration can be managed like code, there is no reason not to automate its validation and verification.


Testing configuration may not seem an obvious thing to do –– but making sure required configuration values are available and well formatted is a good starting point.


Going down the rabbit hole ~ Source: Chris Lea's – Proxying WebSockets with Nginx

More on the reading list

Going down the rabbit hole ~ Other people who worked on same problem: How to store Node.js deployment settings/configuration files?, Managing config variables inside a Node.js application, Configuring Node.js Web Applications… Manually || Convict.js


A middleware is a piece of code that does some work, then delegates the remaining work to the rest of the router logic. This definition is of course minimalistic, and limited to expressjs in the context of this book.

The point of a middleware is to group repetitive, first-to-execute logic of request handling into re-usable, granular pieces of code that are easy to test in isolation.


Two of the best known and most widely used Express middleware are request authentication and CORS support. That is because, in one way or another, most nodejs/express applications have to implement those two.

CORS stands for Cross-Origin Resource Sharing. This protocol (or security enhancement) controls whether HTTP serves or denies access to resources requested from a different Origin.


Since the introduction mentioned CORS –– it is worth adding that every route in a nodejs/expressjs application may need to waive cross-origin restrictions on certain resources. That is especially the case if the application serves as a REST API endpoint.

The code that solves the issue looks a bit like the following:

var User = require('./models').User;
var allowedHeaders = `X-CSRF-Token, X-CSRF-Strategy,
                      X-Requested-With, Accept, Authorization,
                      Accept-Version, Content-Length, Content-MD5,
                      Content-Type, Date, X-Api-Version`;
/** code that initializes everything, then comes this route */
app.get('/users/:id', function cors(req, res, next){
    //to waive credentials we have to change response headers
    res.set('Access-Control-Allow-Credentials', true);
    res.set('Access-Control-Allow-Origin', '*');
    res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
    res.set('Access-Control-Allow-Headers', allowedHeaders);
    res.set('Content-Type', 'application/json');
    res.set('Access-Control-Max-Age', 3600);
    if (req && req.method === 'OPTIONS') {
        //preflight request: respond right away
        return res.status(200).end();
    } else {
        //hand execution over to the next handler
        return next();
    }
  }, function handler(req, res, next){
    User.findById(, function(error, user){
      if(error) return next(error);
      return res.status(200).json(user);
    });
});

Example: Route waiving clearance to access the endpoint from another Domain


From the previous code snippet, having 1000+ routes, each with its own cors middleware, would be a headache. Moreover, one route can in fact need more than one middleware. Without further ado, here are the challenges identified so far:

The challenges of testing middleware are not much different from those of testing controllers.

  • Copy/pasting the same code around is not viable and, most certainly, not testable.
  • Routes may have more than one middleware.
  • There should be a way to skip (mock) middleware, especially if the middleware is making database calls.

The next step provides tricks to modularize the CORS middleware, and makes it possible to use multiple middleware on any given route.


The first step in this process is to identify major categories in our code sample, and move those categories into their own library — the module.

The cors middleware can be moved to the [app-root]/middleware/index.js library. Using index.js alone may become problematic as more and more middleware enter the project. Since it is better to have one file per module, and to avoid confusion while editing multiple index files, the following tweak does the trick:

  • The /middleware/cors.js file is created and the code of middleware is modularized in that file.
  • The /middleware/index.js serves as the gateway to the middleware module, and exports every module in the middleware directory.
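The two steps above might be sketched as follows. The file paths come from the steps themselves; the headers and mock objects are assumptions carried over from the earlier CORS snippet, shown in one runnable file so the extracted middleware can be exercised without a server:

```javascript
// middleware/cors.js (assumed path) -- the CORS logic modularized into its own file
function cors(req, res, next) {
  res.set('Access-Control-Allow-Origin', '*');
  res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
  if (req.method === 'OPTIONS') return res.status(200).end();
  return next();
}
module.exports = cors;

// middleware/index.js (assumed path) -- the gateway simply re-exports every middleware:
// module.exports = { cors: require('./cors') };

// Exercising the middleware with hand-rolled mocks, no server required
var headers = {};
var res = {
  set: function (key, value) { headers[key] = value; },
  status: function () { return this; },
  end: function () {}
};
var nextCalled = false;
cors({ method: 'GET' }, res, function next() { nextCalled = true; });
console.log(headers['Access-Control-Allow-Origin'], nextCalled); // '*' true
```

Because the middleware is now a plain exported function, any route test can stub it the same way it stubs any other function.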


var getUsers = require('./controller').getUsers;			

Example: Importing one of the controller functions


This section focuses on how to mock Request and Response Objects while testing ExpressJS middleware, namely:

  • Spying on whether certain calls have been made
  • Making sure the requests don't hit the remote server
var sinon = require('sinon'),
    chai = require('chai'),
    expect = chai.expect;

describe("Routes", function() {
    describe("GET Users", function() {
        it("should respond", function() {
            var req, res, spy;
            req = res = {};
            spy = res.send = sinon.spy();
            getUsers(req, res);
            expect(spy.called).to.equal(true);
        });
    });
});

Example: Mocking a request and a response

  • Particular Case: How to mock a response that will be used with a Streaming Source.

Key Takeaway

Middleware plays a big role in the node/express world. However, it may be a source of frustration while testing code that heavily uses it. To turn that frustration into joy, the following points are worth remembering while testing node/express, or any other stack that uses middleware:

  • Middleware are a special kind of utility library. They are independent, always do just one thing, and delegate additional processing to the next handler, be it another middleware or a route handler.
  • When grouped into their own files, stubbing middleware becomes as easy as stubbing any function.


Middleware makes it possible to complement routes by hooking into the execution life-cycle. Testing middleware is not different from testing full-fledged routes –– including the modularization strategy.


Controllers are the glue between the View and the Model, in the context of the MVC paradigm. This time though, we introduce Controllers as functions re-usable by two or more routes, serving as a bridge between business logic (in services, models, or file systems) and request routes.


The controller introduced in this chapter plays the same role as the controller in the MVC paradigm, with one exception: the view is not necessarily HTML based. It may well be in JSON or XML format. The controller glues the model layer — or the application's state — to the view. The controller contains business logic. To keep things tight, it is even better to group, as much as possible, business logic into services.

Having a controller in every application is not an obvious choice. Given that the current application is a server based application –– the routes alone may make more sense. The controller layer comes into the picture when multiple routes are literally copies of each other.

In such a scenario –– the routes can be modularized by introducing a controller layer.

Introducing controllers makes sense just as introducing middleware does. The difference between controllers and middleware is the order of execution: controllers come second, after all middleware have executed. Controllers make it possible for two or more routes to share the same request handler.
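That execution order can be sketched with a tiny stand-in for express' handler chaining (all function names here are illustrative, not from the book's codebase):

```javascript
// Middleware run first, in registration order; the controller runs last.
var trace = [];

function cors(req, res, next) { trace.push('cors'); next(); }
function authenticate(req, res, next) { trace.push('authenticate'); next(); }
function controller(req, res) { trace.push('controller'); res.end(); }

// A minimal stand-in for how express chains handlers on one route
function run(handlers, req, res) {
  (function step(i) {
    if (i >= handlers.length) return;
    handlers[i](req, res, function next() { step(i + 1); });
  })(0);
}

run([cors, authenticate, controller], {}, { end: function () {} });
console.log(trace.join(' -> ')); // cors -> authenticate -> controller
```

Each handler decides whether to hand off via next(); the controller, being last, sends the response instead.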

This section focuses on how to modularize route's request handlers into smaller, testable controllers in a nodejs/express application context.


A classic route code-block is organized as in the following example:

'/route',
      function middleware(req, res, next){ /** Logic, then return next() */ },
      function controller(req, res, next){ /** Controller Logic */ });

Example: Depicting where the Controller ranks in a Request Handler

A typical full fledged controller will look as in the following code sample.

var app = require('express')(),
    cors = require('lib/middleware').cors,
    authenticate = require('lib/middleware').authenticate,
    Messenger = require('lib/email'),
    User = require('./models').User,
    redis = require('redis'),
    redisClient = require('util/redis')(redis);

app.use(cors); //using the cors middleware to support CORS'/messenger/:email/send', authenticate, function(req, res, next){
  //preparing the message
  var options = req.params,
      payload = Object.assign({}, options, req.body);

  //Using the Model to validate a user
  User.findById(req.session.userId, function(error, user){
    if(error) return next(error);
    new Messenger(options).send().then(function(response){
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job; more tasks can be added here
      return res.status(200).json({message: 'Message Sent'});
    });
  });
});


To avoid the integration-testing trap, the emphasis should be on testing the code of the controller, as opposed to the output (response) of the controller.

In other words, decouple “Unit Testing” from “Integration Testing”, and vice-versa.

  • It is hard to test a poorly organized Controller.
  • If the Controller is not testable, moving some parts in their own libraries makes sense.
  • Controllers tightly coupled to the model, reading/writing to the filesystem or integrating with a third party library, introduce extra work. To mitigate the challenge, delegating those tasks to a service layer or utility library make a strategic sense.
  • Since controllers are integral part of a router, there is a challenge while mocking request and response objects.


The first thing that catches the eye is how the controller is embedded within the router definition. It is clear that when the router definition becomes too big to deal with, the first step in breaking the code down is to pop the controller out of the router. That will become clearer in the refactoring process explained below.

  • Depending on how the project is organized, controllers ejected from routes belonging to the same category may also be grouped in the same category by default. E.g: [app-root]/controllers/user
  • In some cases however, we may witness similarities, if not carbon copies, among ejected controllers. Those similar controllers become good candidates for merging into one controller.
  • The index of every controller category reveals single controllers defined within the category. That means /controllers/user/index.js exports functions defined in user/create.js, user/update.js etc.
  • Most of controller business logic can well be grouped into a service layer. More on the need of a service layer will be explained in the service layer chapter.


The refactoring process takes two categories of tasks. The first is ejecting the controllers off the routes. The second is organizing controllers into modules. The introduction of services is reserved for the service layer chapter.

All of these refactorings have to lead to better testability.

First things first: providing a name to the controller before ejection gives a clear idea of what comes next. The name has to reflect the functionality it provides to the router as clearly, and as simply, as possible.'/messenger/:email/send', authenticate, function sendMessageByEmail(req, res, next){ /** Implementation */ });

Example: Naming the Controller function

The next step is to eject the function and use its name in the router. That results in a somewhat cleaner router construct.

function sendMessageByEmail(req, res, next){
  /** Implementation */
}'/messenger/:email/send', authenticate, sendMessageByEmail);

Example: Ejecting the name controller out of the router construct

The last step in the process deals with creating a controller file structure, moving the code over to its new location, and importing the new controller into the router. The end result looks as in the following example.

var sendMessageByEmail = require('controller/messenger/send-email');'/messenger/:email/send', authenticate, sendMessageByEmail);

Example: Importing modularized controller into the router

Even though the file structure looks a bit cleaner, the only part that becomes easier to mock or test is the router. The controller, not so much. The callback structure in the controller is our “bête noire”. To make the controller cleaner, we are going to rely on a technique that purges callbacks out of the controller construct, in favor of a cleaner promise-based construct.

As a primer, mongoose models normally look as in the following code sample.

new User(options).save(function(err, user){
  if(err) return next(err);
  return next(null, user);
}); //<- Callback that will be completely replaced with the Stub.

Example: Replacing Callback with a stub

However, mongoose models also support promise-based constructs, which are far easier to test. There is no performance gain in making the transition, but no loss either. It is a stalemate refactoring: nothing to lose, nothing to gain, at least from a performance standpoint.

new User(options).save().then(function(user){}).catch(function(error){});

Example: Replacing a callback using a Promise

Alternatively, it is possible to group user model manipulations into a special kind of utility library, called a service.

  function UserService(){ }
  UserService.prototype.create = function(options){
    return new Promise(function(resolve, reject) {
      return new User(options).save(function(error, user){
        if(error) return reject(error);
        return resolve(user);
      });
    });
  };

Example: Encapsulating User Model actions into a UserService service

Last but not least, our controller takes a similar revamp:

module.exports = function(req, res, next){
  User.findById(req.user, function(error, user){
    if(error) return next(error);
    new Messenger(options).send().then(function(response){
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job
      return res.status(200).json({message: 'Success Message'});
    });
  });
};

Example: Multiple Callback post finding

The callback solution can be turned into a promise-based version, as a way to prevent, or eliminate, a callback hell scenario.

//can be easily turned into: ---- the problem is the Object returned by UserService.find()
module.exports = function(req, res, next){
  return new UserService().find(req.user).
    then(function(user){ return new Messenger(options).send(); }).
    then(function(response){ return new RedisService(redisClient).publish(Messenger.SYSTEM_EVENT, payload); }).
    then(function(response){ return res.status(200).json(message); }).
    catch(function(error){ return next(error); });
};
//To combine responses, the promises above can be merged one after another.

Example: Transforming callback into promised version


Testing the controller forces us to test some key areas that have dedicated chapters in this book: testing the Model layer, mocking Request and Response objects, and testing Asynchronous code.

The layers that have their own chapters are not the subject of this one. Reading the chapters on testing the Model layer, testing Asynchronous code and testing Routes will help you be at ease while reading this chapter.

  • Wrapping the initial function, or using callThrough
  • The stub's callback replaces any findById callback.
  • Which means we will not be able to execute computations inside the original callback (i.e. Messenger.send(), etc.)
  • Somehow, we need to wrap the previous callback inside the new callback.

To put things in perspective, the end result of ejecting controller from its router resulted in code similar to the one in example below:

var app = require('express')(),
    sendMessageByEmail = require('controller/messenger/send-email');'/messenger/:email/send', authenticate, sendMessageByEmail);

Example: Sample of resulting Router code after Controller ejection and modularization

Using techniques discussed on testing the router in isolation, both the authenticate middleware and the sendMessageByEmail function were stubbed, and their request/response objects mocked. We have also seen that the end result is not ideal unless both the middleware and the controller are covered with their own tests, in isolation.

The next couple of paragraphs are dedicated to testing the controller code in isolation. Two categories of test scenarios are discussed: the first is when the code uses callbacks; the second is when the callbacks have been refactored into promises and services.

Both scenarios read/write to a database, to a third party service, or to the local file system. In all of those cases, stubbing takes the same approach and uses the same testing utilities provided by the sinon library. In the same context, mocking the response uses the same libraries.

To avoid repetition, the strategy to handle those cases is discussed in testing Async code chapter and will not be repeated in this chapter.

Key Takeaway

The controller layer comes from the specialization of Router handlers.

  • The best way to approach any testing is to move every independent object into a smaller testable object.
  • Using an in-memory database makes tests run faster, but memory then becomes a scarce resource.
  • To save on memory and test response time, mocking expensive components becomes a requirement.
  • The service layer makes it easy to mock and test the most expensive features in isolation. Read recommendations on mocking/testing the database for other people's opinions.
  • Reaching a dead-end while testing controllers means some progress is being made; workarounds, also known as refactoring, are needed to break the impasse.


The Controller layer makes sense only when it serves the modularization of the Routes. It is possible to use Controllers as WebSocket handlers or background job handlers. Testing the Controller becomes even easier when cross-cutting concerns are moved to Services –– where they are mock-able.


Going down the rabbit hole ~ For more on mocking requests, this article can be a good starting point – Mocking Request Promise – with Mockery

More details in testing controllers

Going down the rabbit hole ~ Passing data between Promise callback, Combine data of two async requests to answer both requests, Bluebird has a .join() function ~ works better than Promise.all()

Going down the rabbit hole ~ How to test express controllers from TerLici blog

Going down the rabbit hole ~ with following articles: Nock a primer on David Walsh Blog, Using Nock ~ This approach works more than the way I test WebHooks with pre-programmed responses, Unit Testing Express/Mongoose App routes without hitting the database

More in the reading list section

Utility Libraries

The utility library holds code that is used in various places, but that does not fit into a Service, Controller or Middleware. Those other layers need the utilities to perform some sort of computation.


The code that goes into utility libraries is most of the time the most important. A healthy code coverage makes sure the utility library can be shared across projects –– safely. The utility libraries are a good place to start testing, if these two conditions are met:

  • You are tasked with a large scale legacy project with virtually zero unit tests. The code has rotted to the point where you are afraid to add even a comma to the first file you open.
  • You have no requirements (features, bugs) that need your immediate attention, but you expect new requirements to land on your desk in a week or two.


//Utility to format a User name
module.exports.formatName = function(data){
  return Object.assign({}, {
      first: data.first,
      last: data.last,
      full: [data.first, data.last].join(' ')
  });
};

Example: Utility function example located in [app-root]/util/index.js


Ideally, the utility library should be a collection of pure functions, with few to no dependencies at all. A high number of dependencies in one function may introduce circular dependency problems.

The number of utility functions may grow exponentially, especially after premature optimization. To prevent this, it makes sense to apply the DRY principle only to code that occurs three or more times.

A pure function is a function that 1) produces the same output for the same input and 2) has no side effects — it does not mutate any state. wikipedia

Circular dependency is “a relation between two or more modules which either directly or indirectly depend on each other to function properly” wikipedia

DRY principle ~ Don't Repeat Yourself — this principle reduces the amount of copy/paste in source code. wikipedia. DRY should always go hand in hand with KISS.


The utility library is a good starting point for modularization. Snippets that appear in at least three places (copy/paste) can be extracted into functions and moved to a common library. If there is a pattern in such functions, there are good chances those functions can be grouped into a module.


The code in the utility library lands there after applying DRY in various locations. The refactoring strategy makes sure no two functions do the same thing under different names.

Another refactoring goal is to eliminate circular dependencies. The simplest way to do so is to make most of the functions pure.
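A small illustration of the difference (the function names and tax example are invented for illustration): the pure variant receives everything it needs as arguments, which removes the hidden module-level dependency and any risk of a require cycle around shared state:

```javascript
var taxRate = 0.5; // module-level shared state

// Impure: the result depends on state the caller cannot see or control
function priceWithTaxImpure(price) {
  return price * (1 + taxRate);
}

// Pure: same input always yields the same output, no hidden dependency
function priceWithTax(price, rate) {
  return price * (1 + rate);
}

console.log(priceWithTax(100, 0.5)); // 150
```

The pure version can be tested with a one-line assertion, and never needs another module to be loaded first.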


describe('util#formatName()', function(){
    it('returns first, last and full name', function(){
        var name = formatName({first: 'Allan', last: 'Poe'});
        expect(name.full)'Allan Poe');
    });
});

Example: Testing a Utility Function in [app-root]/test/util/index.spec.js

Key Takeaway

The utility library has cross-function utility code. This sounds repetitive, but that is also the beauty of it.

  • Whenever possible, existing utility libraries such as underscore or lodash can be leveraged to accomplish most, if not all, daily tasks.
  • When necessary, utility functions should be as small and as independent as possible.
  • The rule of thumb is DRY. If the code has been copied for the third time, that is a pretty good indication it belongs in utilities.


The reason utility libraries are a good place to start lies in the fact that even the worst projects have them. Utility libraries are in most cases isolated from the rest of the code, relatively easy to read, and easy to make.

Technically, any copy/paste code is a good candidate to become a utility.

Async — Callbacks

A callback is a function that executes once the caller function is done performing its main job. In other words, the caller function hands the next execution step to the callback.

Callbacks can be as simple as pure functions, or as complex as thunks.


This section covers strategies to test and mock most callback functions in isolation.

It is worth mentioning that middleware, as far as expressjs is concerned, are callbacks too. To kill two birds with one stone, we build our use case around middleware.


The following code sample shows how quickly callbacks can turn into a callback hell.

  app.get('/account/:id/profile', function(req, res, next){
    //do things and send a response
  });

Example: Sample of Async Callback [app-root]/routes/user.js

As a quick example, we can add code to read the database, add a couple of asynchronous data processing functions or third party integrations, and we have a good cocktail for disaster.

app.get('/account/:id/profile', function(req, res, next){
  User.findById(, function(error, user){
    if(error) return next(error);
    markedejs.renderFile(template, params, function(error, html) {
      if(error) return next(error);
      mailgun.messages().send(options, function(error, body) {
        if(error) return next(error);
        redisClient.publish('system:event', payload, function(error){
          if(error) return next(error);
          return res.status(200).json(user);
        });
      });
    });
  });
});

Example: Callback hell-ish [app-root]/routes/user.js

The above illustrates how the classic callback example can go off the rails fairly quickly. Sentiments while testing code like this range from frustration, to nightmare, to not touching the code at all –– until a bug is filed, and the root cause lies somewhere in the chain of callbacks.


Depending on the degree of third party integrations, or the use of third party libraries in the code, additional dependencies may be required to test a callback. In the scope of this book, sinon is our go-to library for mocking, spying and stubbing; chai provides special assertion tools.

  • A top level callback that executes complex operations via its nested callbacks is hard to test, hard to debug and hard to maintain.
  • Overall, nested callbacks are the ones doing complex operations such as writing to the filesystem, hitting the network or persisting state to a database.
  • Those complex operations need to be tested in isolation. Once that is done, their tasks have to be stubbed one by one, and their response objects mocked altogether.

The challenge is how to achieve that at lower cost. The cost is in terms of not only time, but also collateral damage that comes with any code change.

It is possible to rely on the nodejs native assertion library. If you need more assertion tools, such as assert.isAtMost, then the chai dependency can be added to the test toolkit. Otherwise, it is not really needed.


There is no special use case of modularizing callbacks per se. However, there are plenty of opportunities to modularize the parts that use certain callbacks.

The callback paired to a router may be either a middleware or a controller. In such a case, the callback can be modularized following strategies laid out in Controllers and Middleware chapters.

On the other hand, copy/pasted callbacks are good candidates for the utility library. Likewise, the Utility chapter lays out strategies to modularize callbacks as independent, re-usable sets of functions.


Callback hell and how to tame the dragon

When callbacks have too much nesting, we have a callback hell scenario. The tricky part is to establish how much nesting is too much — or which level of nesting is considered OK.

The rule of thumb is: “if it ain't broke, don't fix it”.

There is an unwritten rule to keep nesting below three levels deep. Any refactoring that achieves that depth, or gets closer to it, qualifies as a good strategy to tame the dragon.

When too many callbacks are intertwined, the end result is a callback hell. Modularization can solve part of the problem. A more advanced technique is restructuring the code with Promises. Another refactoring technique is adopting higher order functions. In case of complex processing, operations can be moved to either utilities or Services.

A higher order function is a function that satisfies at least one of the following: takes a function as an input – returns another function (a.k.a thunk)
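As a quick, illustrative example of a higher order function (the once() helper is invented here, though lodash ships a similar one), here is a function that takes a function and returns a new one:

```javascript
// once(fn) returns a wrapped version of fn that only runs the first time;
// subsequent calls return the cached result.
function once(fn) {
  var called = false, result;
  return function () {
    if (!called) {
      called = true;
      result = fn.apply(this, arguments);
    }
    return result;
  };
}

var calls = 0;
var boot = once(function () { calls += 1; return 'booted'; });
console.log(boot(), boot(), calls); // 'booted' 'booted' 1
```

Higher order functions like this one are trivially testable in isolation: pass a spy in, assert on the result and the call count.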

This step shows two use cases –– delegating callback functionality to the middleware layer, or to utils as an independent module. Each module (or utility function) can then be tested in isolation, and stubbed everywhere else.

Refactoring the callback hell using promises will be the subject of the Async — Promises chapter.


Any spec has to have a basic setup. That setup includes the function to be stubbed, along with the stubbing and assertion libraries. Moreover, a common testing template will be found throughout the test cases.

These two main concepts can be depicted by the following code sample:

var fs = require('fs'),
    sinon = require('sinon'),
    assert = require('chai').assert;

//in any.spec.js
describe('fs', function () {
    //other describe and it constructs

Example: Testing libraries required and testing template in some .spec.js file

The fs library is native to nodejs and has most utilities required to manipulate the filesystem, which makes it a good candidate for a file system stubbing example. Few functions fit that purpose better than fs.unlink or fs.write.

As an example, the idea is to replace fs.unlink with a stub, coupled with a spy, so that we can check that a file that should be deleted was indeed requested for deletion. This test makes sense in that we don't want actual files deleted from the file system while testing –– not only because hard drive I/O costs more, but also because we don't want to delete files by accident.

   this.unlink = sinon.stub(fs, 'unlink', function(filepath){ return true;}); 

Example: Stubbing file deletion operation in some .spec.js file

The function that deletes a file has to take a callback. callFunctionThatDeletesFiles describes such a function. To make sure the test executes to the end, the done callback is added to the test. Sometimes these kinds of tests end with timeout errors; then you have to debug and understand why callFunctionThatDeletesFiles is not able to execute the callback passed in.

// Somewhere in your code. 
describe('unlink()', function(){
    it('removes a file', function (done) {
        callFunctionThatDeletesFiles(function next(){
            assert(fs.unlink.called, "unlink() has been called");
            done();
        });
    });
});

Example: Testing a function that deletes a file –– with a callback

The previous procedure can well be applied to the following function and callback found in the sample code:

User.findById() ~ advanced stubbing and mocking techniques are provided in the chapter on testing models

//next will be the callback simulating the real life function
function next(fn, params){
  //check if params is the one that has apply instead, and apply it
  return fn.apply(this, arguments);
}

this.modelFindByIdStub = sinon.stub(UserModel, 'findById', next);

Example: Stubbing User.findById() with a possibility to execute next custom embedded callback

markedejs.renderFile() ~ provides a special use case: stubbing a callback and mocking a response. The html object has to be a valid HTML string.

this.renderFileStub = sinon.stub(markedejs, 'renderFile').yields(null, mockedHTMLString);

Example: Stubbing markedejs.renderFile() with a possibility to execute next embedded callback

mailgun.messages().send() ~ the obvious special case of this construct is to make sure that the stubbed send() is built atop a stubbed messages(). The technique is to stub messages() and mock response as another stub of send() function.

this.messagesStub = sinon.stub(mailgun, 'messages').returns({
    send: sinon.stub().yields(null, mockedMessageBody)
});

Example: Stubbing mailgun.messages() with a stubbed .send() callback that executes next embedded callback

Key Takeaway

Callbacks form the building block of asynchronous JavaScript constructs. They provide the ability to have the last say when the action eventually finishes execution. However, a good strategy for dealing with callbacks is needed.

  • To test a callback to an expensive function, stubbing the caller makes sense. Not only does this make tests faster, it also prevents chaos in the program. Think of a test about deleting files: when not handled properly, actual deletes can wipe out a pretty good amount of files on the filesystem.
  • There is a love/hate relationship with callbacks in the developer community. However, we have to recognize the progress they brought to the JavaScript ecosystem, especially when dealing with the asynchronous nature of the environment.


This section provided hints on how to approach testing callbacks. Testing asynchronous code is easy when you have a clear strategy and adopt the right approach. Callbacks are a good example of asynchronous code –– Events, Streams and Promises all use callbacks upon completion. A good understanding of callbacks gives better foundations to code and test all of them.

Async — Promises

The MDN defines a promise as: “The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.”


Promises can be used to refactor a callback hell. This section covers strategies to test and mock Promise constructs. The idea discussed in this section is to replace the function that makes the external request with a Stub. The stub has to return either a Promise with a mocked response, or simply a Mock of the Resolved Response.


Let's consider a simple form of a Promise construct. It uses the Fetch API, but variations can use other HTTP clients.

The fetch API is native to browsers, but not available in the nodejs runtime. However, there is a library, node-fetch, that ports the fetch API to the nodejs runtime.

//Lab Pet fetches data from a url
fetch(url)
    .then(function(response){
        new Service().doSomethingWith(response);
        return response;
    })
    .catch(function(error){
        new ErrorHandler().doSomethingWith(error);
        return error;
    });

Example: A promise-based async construct


The challenges of testing promises are not that different from the ones stated in testing callbacks. There is an additional level of complexity when the callback makes HTTP requests, calls a database, writes to the filesystem or integrates with third party APIs.

Another challenge is testing, or mocking, intermediate promises. Intermediate promises are chained promises that resolve between the first and the last resolve block.
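The idea can be sketched without any library. Assume a hypothetical processOrder() chain whose steps are injected, so a test can swap any intermediate step for a stub that resolves or rejects:

```javascript
// Sketch: testing intermediate promises by injecting stubbed steps.
// processOrder(), fetchUser, makePayment and sendEmail are hypothetical.
function processOrder(fetchUser, makePayment, sendEmail) {
  return fetchUser()
    .then(user => makePayment(user))
    .then(payment => sendEmail(payment))
    .then(() => 'done');
}

// All intermediate steps succeed.
const happy = processOrder(
  () => Promise.resolve({ id: 1 }),
  user => Promise.resolve({ paid: true, user }),
  () => Promise.resolve(true)
);

// One intermediate step fails: the chain short-circuits to .catch().
const sad = processOrder(
  () => Promise.resolve({ id: 1 }),
  () => Promise.reject(new Error('payment declined')),
  () => Promise.resolve(true)
).catch(error => error.message);
```

With this shape, making one intermediate promise fail is just a matter of swapping one injected stub.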

Each of the stated challenges will have some level of coverage in the next chapter.


Modularization allows us to break some functions and classes into re-usable components, which are therefore testable in isolation or stub-bable as needed.


Refactoring promises has to focus on keeping things simple. There is always room for improvement. Larger .then() or .catch() constructs can leverage utility libraries. This is especially the case when data transformation is involved. If for some reason a function is ejected, testing that ejected function in isolation becomes easy, as there is no need to glue it to the whole promise construct.


The “function that makes the external request” is fetch. Replacing fetch with a stub allows the async functions defined in subsequent .then() and .catch() constructs to continue with execution. There are various ways to deal with such a situation, depending on how deep tests have to go. Some of those techniques are examined in the following test examples. Before that, let's examine the structure of the test cases.

The first section of the test case imports the dependencies needed to make the test a success.

var sinon = require('sinon'),
    bakedPromise = require('./fixtures/baked-promise'),
    mockedResponse = require('./fixtures/mocks/mocked-response');

Example: Importing fixtures in any .spec.js file

The second section shows how the test case is organized.

//in any.promise.spec.js
describe('GET /url', function () {
    //other describe and it constructs
});

Example: Structure of a .spec.js file

In all cases, you will need to restore the stubbed fetch function. I always like to start with the after/afterEach block, so that I don't somehow forget to add it.

  afterEach(function(){ this.fetchStub.restore(); });

Example: Tear down –– resetting fetch stub in any .spec.js file

One way to approach mocking a response is to return a plain simple Promise. Another, similar, way is to replace fetch with a stub that returns a Promise. The last is to rely on a Promise baked into the stubbing utility. Those three ways are expressed in the following beforeEach snippet.

    //one way: return a baked promise
    this.fetchStub = sinon.stub(window, 'fetch').returns(bakedPromise(mockedResponse));
    //other way: stub fetch with a function that returns a baked promise
    this.fetchStub = sinon.stub(window, 'fetch', function(options){
        return bakedPromise(mockedResponse);
    });
    //yet other way: using a stubbing utility that resolves to a promise
    this.fetchStub = sinon.stub(window, 'fetch').resolves(mockedResponse);

Example: Stubbing for successful functions

You may have noticed the above stubs expect cases where the function is supposed to succeed. But what can you do when you are tasked to check whether the right error handling is being executed? That is where failure test cases come in. You can always group failing test cases in one suite, or re-initialize stubs case by case. The following lines display some ways you can do it.

   //one way
    this.fetchStub = sinon.stub(window, 'fetch', function(options){
        return bakedFailurePromise(mockedResponse);
    });
    //another way: using 'sinon-stub-promise's returnsPromise()
    //PS: You should install => npm install sinon-stub-promise
    //expectedError is any Error fixture of choice
    this.fetchStub = sinon.stub(window, 'fetch').returnsPromise().rejects(expectedError);
    //same way: without sinon-stub-promise, possible for sinon version >= 2.0.0
    this.fetchStub = sinon.stub(window, 'fetch').rejects(expectedError);

Example: Stubbing for unsuccessful calls

Finally, the actual testing may look something like one of the following:

it('works', function(){
    //use the stubbed function like nothing happened
    assert(this.fetchStub.called, 'fetch() has been called');
    //or
    assert(window.fetch.called, 'fetch() has been called');
});

Sources: Stubbing JavaScript Promises with SinonJS, How to Stub Promises Using SinonJS

  • bakedPromise() is any function that takes a Mocked(baked) Response and returns a promise
  • This approach doesn't tell you if Service.doSomethingWith() has been executed.

Testing intermediate promises essentially means “combining” intermediate resolved promises into a series of mocked promise results. It is possible to make all intermediate promises successful, as it is to make one of the intermediate promises fail.

Key Takeaway

The Promise API not only constitutes an effective way to deal with callback hell, it also makes asynchronous software easier to reason about. There are countless libraries that make it easy to test promise-based source code.

  • When no writes are involved, mocking a response is a good way to test promise outcomes.
  • When writes are involved, stubbing the write function is also a good way to test the promises.
  • It is possible to stub a function, and still cover the promise callback with a set of tests.


Testing asynchronous code is easy when you have a clear strategy and adopt the right approach. This section provided hints on how to approach testing Promises.


Going down the rabbit hole ~ Stubbing JavaScript Promises with SinonJS ~ on Johny Reeves' blog. Jake Archibald's article on JavaScript Promises

Async — Stream

Streams are found everywhere in nodejs/express applications. For starters, the Request and Response objects are Streams. Streams make it possible to write data-intensive applications while keeping a low memory footprint at runtime. That makes stream processing popular when dealing with data processing on large files.


Streams follow a set of API rules, namely implementing _read(), _write() or both, or _transform() of the stream library. Data is normally processed as it comes in; for that, the listener is .on('data', handler). Waiting for data makes stream processing async.

This section is about testing readable, writable, duplex and transform streams.


Creating a Readable Stream can be as easy as reading a file or virtual file:

  var reader = fs.createReadStream(filepath);

Example: Stream to read a file

On the other hand, creating a Writable Stream can be as easy as using the response object with expressjs, as in the following:

  function(req, res, next){
      fs.createReadStream(filepath).pipe(res);
  }
Example: Writing to Writable Stream from Readable Stream

The previous example uses the pipe operation on two streams to channel data from one stream to another: readable -> writable.

It is possible to have one stream be both writable and readable. These streams are known as duplex streams. Two-way streams (Readable and Writable) are most of the time designed to make a transformation, hence transformers are duplex streams.

The pipeline then becomes as in the following schema:

  readable -> transformer -> transformer -> transformer -> writable

Example: Stream pipeline with Duplex Transformer Streams

An intermediary stream can be placed between a readable and a writable stream –– the intermediary stream may have the ability to change the stream payload before it outputs to the next stream. The intermediary stream is commonly known as a Transformer Stream.

The Transformer Stream class looks a bit like:

const inherits = require('util').inherits,
      Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//<= re-enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);

Example: Custom Stream Creation Template

The following code sample implements the _transform() function, the function responsible for altering the stream payload (chunk) before writing to the next stream.

MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation
    //@todo process chunk by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || random() });//random() is any id generator of choice
    if(typeof next === 'function') next();
};

Example: Template for a duplex stream implementation

The _flush() function is required to take an extra step, if necessary, once the chunks have all been processed.

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//tells consumers that the operation is over
    if(typeof next === 'function') {next();}
};

Example: Flushing the pipe at end of input

Testing the above function in isolation:

it('_transform() - works', function(){
    var tstream = new MetadataStreamTransformer();
    //stub push() to intercept data sent to downstream consumers
    var pushStub = sinon.stub(tstream, 'push', function(data){
        assert.isNumber(data.id); //testing data sent to callers, etc.
        return true;
    });
    tstream.write(JSON.stringify({id: 1}));
    tstream.write(JSON.stringify({id: 2}));
    tstream.end();
    expect(tstream.push.called, 'push() has been called');
});

Example: Stubbing push method of a duplex stream


The challenges of working with streams vary. Challenges that come with a writable stream may differ from those of a readable or a duplex stream. In this chapter, we will focus on the following challenges:

  • How Stubbing differs from Mocking in a Streams context
  • How Stubbing differs from Spying: spies/stubs are functions with pre-programmed behavior
  • How to tell if a function has been called with a specific argument. For instance, is it possible to tell whether res.status(401).send() was called with no argument?
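For intuition, the spy/stub difference can be sketched by hand, without any library (makeSpy and the send functions below are hypothetical):

```javascript
// A spy records calls but delegates to the real function;
// a stub replaces the behavior with a pre-programmed one.
function makeSpy(fn) {
  const spy = (...args) => { spy.calls.push(args); return fn(...args); };
  spy.calls = [];
  return spy;
}

const realSend = body => 'sent:' + body;      // hypothetical expensive function
const spiedSend = makeSpy(realSend);          // spy: real behavior preserved
const stubbedSend = makeSpy(() => 'mocked');  // stub: behavior pre-programmed

const fromSpy = spiedSend('hello');    // real function still runs
const fromStub = stubbedSend('hello'); // canned response, nothing expensive runs
```

Both record their calls, which is what assertions such as "has been called with a specific argument" inspect.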


There is no special library dedicated to streams. That means we will not have a dedicated directory with the name streams — at least in the scope of this book.

Streams are a technique to do heavy-lifting data processing while maintaining a small memory footprint. If not embedded directly in a controller, a good place to put them is in a utility module or a service module. It is also possible to have stream processing in model libraries.

However, it makes sense to follow the guidelines used in other libraries such as models or services. Ideally though, special tools using a stream API may be moved to the utilities directory. There, they should fall into the design and guidelines of that library.


There will be no special refactoring technique, other than KISS. Streams are a complex subject, and can well be overwhelming to newcomers. But when function implementations are small, clear and right to the point, using streams may well be as easy as writing a new pure function.

There are well maintained libraries for every aspect of stream processing. Making good use of those can be beneficial.

Testing — How to Stub Stream Function and Mock Stream Response Objects

describe('', function(){
    it('works', function(){
      //Adding Mocks+Stubs here
    });
});

The general structure of a stream processing program (server or client):

var gzip = require('zlib').createGzip();//quick example to show multiple pipings
var route = require('express').Router();
//E.g: express
//getter() reads a large file of songs metadata, transforms it and sends back scaled down metadata
route.get('/songs', function getter(req, res, next){
        let rstream = fs.createReadStream('./several-TB-of-songs.json');
        rstream.
            pipe(new MetadataStreamTransformer()).
            pipe(gzip).
            pipe(res);
        //handling errors in the pipes => next hands the error over to the next handler
        rstream.on('error', (error) => next(error, null));
});

Example: Sending stream down the wire –– post stream transformation

  • How to test the above code: in small pieces.
  • gzip and res won't be tested, but stubbed to return writable+readable streams
  • MetadataStreamTransformer will be tested in isolation
  • MetadataStreamTransformer._transform() will be treated as any other function, except that it accepts a stream
  • new MetadataStreamTransformer() won't be tested, but stubbed to return a writable+readable stream
  • fs.createReadStream won't be tested, but stubbed to return a mocked readable stream
  • .pipe will be stubbed to return a chainable stream.
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next() and check that it has been called on a write error.
  • Mocking fs.createReadStream to return a readable stream
//stub can emit two or more chunks + close the stream
var Readable = require('stream').Readable;
var rstream = new Readable();
sinon.stub(fs, 'createReadStream', function(file){
    assert(file, 'createReadStream() received a file');
    rstream.emit('data', "{id:1}");
    rstream.emit('data', "{id:2}");
    return rstream;
});

var pipeStub = sinon.spy(rstream, 'pipe');
//Once called, the above structure will stream two elements: good enough to simulate reading a file.
//to stub the gzip library: another transformer stream
var next = sinon.stub();
//use this function | or call the whole route
getter(req, res, next);
//expectations follow:
expect(rstream.pipe.called, 'pipe() has been called');

Example: Stubbing Readable Stream creation. Source: trick from @link

  • What is the difference between readable vs writable vs duplex streams? Substack's Stream Handbook
  • Readable produces data that can be fed into a Writable stream => has readable|data events + extends by implementing ._read()
  • Writable can be .pipe-d to, but not from (e.g: res in the above example) => has writable|data events + extends by implementing ._write()
  • Duplex goes both ways; a Transformer stream is duplex. It has both sets of events + extends by implementing ._transform()

Testing — How Stubbing HTTP requests works

  • When to use this:
    • Testing all routes
    • Making assertions about the nature of the response returned (utilities included)
    • The server is internally provided, and booted on demand: there is no need to start the base server.
  • When not to use this:
    • While running integration tests that need to hit the database.
  • Using a mocking library such as node-mocks-http makes sure to pre-program requests/responses, with the ability to test whether the expected functions/logic have been executed along the way
  • Since the mocked object created by such a library is a stream, you can also use it in a piped streams context:
// Add promise support if this does not exist natively.
if (!global.Promise) {
    global.Promise = require('q');//or any other promise library
}

Example: Replacing the global promise with a custom one

var chai = require('chai');
var chaiHttp = require('chai-http');
chai.use(chaiHttp); //registering the plugin.

Example: Using Chai Plugin to mock HTTP requests

var app = require('express')();
require('./lib/routes')(app);//attaching all routes to be tested
//use this line to retain cookies, instead of plain chai.request(app)
var agent = chai.request.agent(app);

Example: Registering App HTTP request replacements

//initialization of app can be express or another HTTP compatible server.
it('works', function(done){
    chai.request(app)
        .put('/user/me') //.post|get|delete
        .send({ password: '123', confirm: '123' })
        .end(function (err, res) {
            //more possible assertions
            expect(res).to.have.headers;//Assert that a Response or Request object has headers.
            expect(res).to.be.json; // .html|.text
            expect(res).to.redirect; // .to.not.redirect
            expect(req).to.have.param('orderby');//test sent parameters: req is a mocked Request object
            expect(req).to.have.param('orderby', 'date');//test sent parameter values
            expect(req).to.have.cookie('session_id');//test cookie parameters
            done();
        });
});

Example: Testing HTTP headers

//keeping the port open
var requester = chai.request(app).keepOpen();
it('works - parallel requests', function(){
    return Promise.all([requester.get('/a'), requester.get('/b')])
        .then(responses => { /**do - more assertions here */ })
        .then(() => requester.close());
});

Example: Testing parallel request

Key Takeaway

Streams are counter-intuitive to test. An effective strategy to mock and stub key functions, however, makes the tests more effective. Streams are also hard to work with when getting started. The motivation to keep using them comes from how effective they turn out to be when processing large datasets.


Testing asynchronous code is easy when you have a clear strategy and adopt the right approach. This section provided hints on how to approach testing Streams.


Additional content on Stubbing HTTP Requests function and Mocking HTTP response.

Going down the rabbit hole ~Stubbing HTTP Requests, Mocking Express Request/Response, HTTP Response assertions for the Chai Assertion Library

Additional content on Streams and using Vinyl files.

Going down the rabbit hole ~ Check glob-stream to learn more about initializing all incoming files as a stream, How to TDD Streams, Testing with vinyl for writing to files

Additional content on Stream processing and Mocking the Stream APIs

Going down the rabbit hole ~ More on readable streams (Stream2), QA: Mock Streams, Mock System APIs, Streaming to Mongo available for sharded clusters


Models are everywhere –– in addition to mirroring database tables, models can bring more structure to the application –– as state. In this book, the state refers to the database-bound data model combined with application-specific context values.


This section is about testing models. By testing, I mean unit testing models in isolation, without hitting the database. Testing models while hitting the database is known as integration testing. Such tests are done either to test data integrity scenarios, or via RESTful API integration testing. That will not be covered here.

Since our premise is not to hit the database, the database server will not be needed. That alone dramatically speeds up a test run from beginning to end. For that, we will stub the Mongoose functions supposed to hit the database, and mock the database response (data).


Model code is a pretty interesting use case of async code. It covers well the callback style, the transition to promised constructs, as well as a portion of streaming data from the database.


The challenges of testing models do not include testing the validity of the schema, at least in the context of this book.

Even though most of the challenges of testing models have been addressed in testing async code, model-specific issues can be found in every single corner. We are therefore going to show the challenges that might be encountered while testing code that talks to a database.

The library used throughout this book when it comes to working with databases is mongoose. Most of the techniques discussed are therefore going to be about mocking mongoose models without hitting the actual underlying database.

  • Since models use callbacks as a means to integrate programmer-defined logic, the challenge is similar to the one encountered testing async code with callbacks.
  • Just as promises are used to deal with callback hell, the same principles can be applied to models as well. Then come the same challenges as testing promises in general.
  • Mongoose makes it easy to chain queries. The challenge associated with this feature is to be able to stub a chainable function. The stub should be chainable either to other stubs or to mongoose API functions.
  • The last is the challenge that comes with test libraries. Being familiar with one library does not always translate into success using another library.
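The chainable-stub challenge can be illustrated by hand (chainableStub and the query shape below are hypothetical, modeled on a mongoose-like Model.find().populate().sort().exec(next) chain):

```javascript
// Each chainable method returns the query object itself,
// so any order of chaining still ends at exec().
function chainableStub(results) {
  const query = {
    find() { return query; },
    populate() { return query; },
    sort() { return query; },
    exec(next) { next(null, results); } // mocked database response
  };
  return query;
}

const UserModel = chainableStub([{ name: 'jane' }]);
let received;
UserModel.find().populate().sort().exec((err, users) => { received = users; });
```

Libraries such as sinon-mongoose automate exactly this shape, but the underlying idea is no more than "every link returns something chainable".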

Mocking requests uses nock. Sinon stubs simulate a response from the Mongo::UserSchema::save() function.

Spy on a Model when a certain function gets called (e.g: save), and use a stubbed function. While stubbing a function, we can specify that the function call the original callback.

Mock-all tools like Mockery come with a challenge. When a test fails due to an unhandled exception or rejection, the after hook may not be able to de-register and reset to default functions, which may cause program disruption in some cases.

If a mocked-out function changed the behavior of the file system, for example, failing to reset the function to its initial state may break the whole system, resulting in rebooting either the test cases or the whole system, depending on the extent of the damage.
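A defensive sketch of that failure mode, in plain JavaScript (fakeFs is a hypothetical patched module): restoring in a finally block (or an afterEach hook) guarantees the original function comes back even when the test body throws.

```javascript
const fakeFs = { unlink: (path, next) => next(null) }; // hypothetical module
const original = fakeFs.unlink;

let restored = false;
try {
  // the test replaces the function, then blows up mid-test
  fakeFs.unlink = () => { throw new Error('test exploded'); };
  fakeFs.unlink('/tmp/some-file', () => {});
} catch (e) {
  // in a real runner, the failure would be reported here
} finally {
  fakeFs.unlink = original; // always runs, even after the throw
  restored = fakeFs.unlink === original;
}
```

Test runners' afterEach hooks serve the same purpose, but only if the restore call actually lives there rather than at the end of the test body.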


Modularization of models is not different from other libraries. There is a special responsibility node/express models have in addition to defining the schema: they also execute reads/writes against the database.

It is possible to separate schema definitions from actual models. That decision will be made from project to project. However, it is a common good practice to have one model per file.

As always, the index of the models directory should expose all models defined within a directory.


Testing models will require heavy mocking and stubbing. This will especially be the case when avoiding spinning up a database instance. The common refactoring technique used along the way will be delegation: most data processing or data transformation will be delegated to utility libraries.

Testing –– How to Stub Mongoose Function and Mock Document Objects

  • Unless decided ahead of time, hitting the database slows down unit tests.
  • Writing all of these changes to the database is not advisable.
  • The alternative is to mock mongoose/mongodb connections.
  • The way I do it: using sinon-mongoose

Testing — Mocking Database access functions

Functions that access or change database state can be replaced by calls to functions spied upon, which call custom functions that supply or emulate similar results.

There are a couple of solutions that can be used; one of them is sinon.

//Model should be an actual model, eg: User|Address, etc
ModelSaveStub = sinon.stub(Model.prototype, 'save', cb);
ModelFindStub = sinon.stub(ContactModel, 'find', cb);
ModelFindByIdStub = sinon.stub(ContactModel, 'findById', cb);
//cb will be the callback that simulates the real life function
function cb(fn, params){
    //check whether params is the one that has apply instead, and apply it.
    return fn.apply(this, arguments);
}

Example: Stubbing Each Model's query function [app-root]/test/model/user.spec.js

The nock library is used to mock requests. The sinon library provides spies and stubs. The stubbed function will use fixtures as the expected outcome of a Mongo::UserSchema::save() function call.

Rule of thumb: 1. Spy on a Model when a function is called (e.g: save). 2. Use the stubbed function to simulate the original function. 3. It is possible to call the original callback in a stubbed function.

The strategy is to stub the function that calls the database, and always make sure the async function, if any, continues the flow of the program. In case there is a value, object or function resulting from the stubbed function, a mocked value replaces the expected function call outcome.
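The strategy can be sketched without a library (User, mockedDoc and the manual swap below are hypothetical; sinon's stub()/restore() automates the same moves):

```javascript
const mockedDoc = { _id: 'abc123', name: 'jane' }; // mocked database response

function User(fields) { this.fields = fields; }
User.prototype.save = function (next) {
  // the real implementation would hit the database
  throw new Error('no database in unit tests');
};

// setup: swap save() for a stub that keeps the async flow going
const originalSave = User.prototype.save;
User.prototype.save = function (next) { next(null, mockedDoc); };

let saved;
new User({ name: 'jane' }).save(function (err, doc) { saved = doc; });

// teardown: restore the original, as stub.restore() would
User.prototype.save = originalSave;
```

The caller's callback still runs, which is the whole point: the program flow continues, only the database round-trip is gone.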

Testing –– Order to Stub Mongoose with sinon-mongoose

  • Replace the default promise with a Promise/A+ compliant library
  • Replace Mongoose with sinon-mongoose
  // Using sinon-as-promised with a custom promise
  var sinon = require('sinon'),
      Promise = require('promise');
  require('sinon-as-promised')(Promise);

  // Adding sinon-mongoose to mongoose
  var mongoose = require('mongoose');
  require('sinon-mongoose');

Example: Mocking Mongoose with sinon-mongoose in [app-root]/routes/user.spec.js

Without a mock library:

var mongoose = require('mongoose');
describe('UserModel', function(){
    //...
});

Example: Testing UserModel without a mock library in [app-root]/test/model/user.spec.js

With a mock library –– without promises, i.e. with callbacks

Replacing the default Mongoose promise library:

var mongoose = require('mongoose');
mongoose.Promise = require('bluebird');

//to replace underlying mongodb driver, do instead: 
var uri = 'mongodb://localhost:27017/mongoose_test';
// Use bluebird
var options = { promiseLibrary: require('bluebird') };
var db = mongoose.createConnection(uri, options);

Example: Connecting to an Actual Database [app-root]/server.js

Loading required library


Example: Required Libraries to Mock a live Database [app-root]/routes/user.js

Example of a model definition:

  //in model/user.js
  var UserSchema = new mongoose.Schema({name: String});
  UserSchema.statics.findByName = function(name, next){
      //statics: this gives access to the compiled model
      return this.where({'name': name}).exec(next);
  };
  UserSchema.methods.addEmail = function(email, next){
      //methods: this works on a document instance; retrieve the compiled model first
      return this.model('User').find({ type: this.type }, next);
  };
  //exporting the model
  module.exports = mongoose.model('User', UserSchema);

Example: Modularizing User Model in [app-root]/model/user.js

The schema is compiled into a model before testing:

//in model/user.js
var UserSchema = new mongoose.Schema({name: String});
mongoose.model('User', UserSchema);

Example: Compiling UserSchema into a Model [app-root]/model/user.js

Subsequent behaviors such as save() and find() are tested after the before() hook.

// test.spec.js
describe('UserModel', function(){
    before(function(){
        //the model is declared in model/user.js
        this.User = mongoose.model('User');
        this.UserMock = sinon.mock(this.User);
    });
});

Example: Testing UserModel in [app-root]/test/model/user.js

The following strategy fails:

  • Mock works on objects, i.e. models.
  • save() is defined on the Document, not on the model object itself.
  • This explains why we stub the prototype: sinon.stub(UserModel.prototype, 'save', cb)
  • Without a mock, it becomes impossible to chain any extra function such as .exec() or .stream()
  • So a double stub is required in such cases:
  • sinon.stub(UserModel.prototype, 'save', cb).returns({exec: sinon.stub().yields(null, results)});
  • Alternatively, use .create() instead
  • Same here –– but it requires a lot of changes to the existing codebase
  • Or use Factory Girl, as in this answer
describe('save()', function(){
    it('works', function(){
        var self = this;
        var user = {name: 'Max Zuckerberg'};
        var results = Object.assign({}, user, {_id: '11122233aabb'});
        //yields works for callbacks
        this.UserMock.expects('save').withArgs(user).yields(null, results);
        sinon.stub(this.User.prototype, 'save', cb);//<- should be done in a mock fixture
        new this.User(user).save(function(err, user){
            //add all assertions here.
        });
    });
});

Example: Stubbing .save() in [app-root]/test/model/user.spec.js

describe('find()', function(){
    //.chain adds the possibility to test various chainings in a find query.
    //this will be frequent in apps that fetch more than they write
});

Example: Testing .find() Chained queries in [app-root]/test/model/user.spec.js

Note: Models should be created once, across all tests.

  • This error: OverwriteModelError: Cannot overwrite `Activity` model once compiled. means one of the following occurred:
  • you got the caps wrong while importing a model => import User from 'model/user'
  • you got the model definition wrong: var userSchema = new Schema({}); module.exports = mongoose.model('user', userSchema) <= a new schema, and not just a schema (this was my case)
  • you compiled the model twice; guard against recompilation with: module.exports = mongoose.models.User || mongoose.model('user', userSchema);
  • QA: StackOverflow

Testing/Mocking model pre-hooks

With a mock library –– with promises

//in UserModel.js
UserModel.findById(id)
    .exec()
    .then(function(result) {
        //do things
        return result;
    });

Example: Code Sample with .exec() construct in [app-root]/model/user.js

//in user.model.spec.js 

Example: Loading Required Libraries in [app-root]/test/model/user.spec.js

//in user.model.spec.js, the describe section looks like:
describe('UserModel', function(){
    it('works', function(){
        this.UserMock.expects('find').chain('exec')
            .resolves('SOME_VALUE'); //.yields(null, 'SOME_VALUES')
    });
});

Example: Testing Chained Query Functions in [app-root]/test/model/user.spec.js

With a mock library –– paired with streams

UserModelMock.find().stream().pipe(new Transformer()).pipe(res);

Example: Streaming from Database Code Sample in [app-root]/model/user.js

//in user.model.specs.js

Example: Loading Required Libraries in [app-root]/test/model/user.spec.js

describe('UserModel', function(){
    it('works', function(){
        //...
    });
});


Example: Testing UserModel in [app-root]/test/model/user.spec.js


Example: Loading Appropriate Libraries in [app-root]/test/model/user.spec.js

describe('UserModel', function(){
    //...
});

Example: Testing UserModel in [app-root]/test/model/user.spec.js

Testing — Chained Model Functions

It is not so obvious how to test a code block such as:

  Order.find().populate().sort().exec(function(err, order){
    /** ... */
  });

Example: Sample of Chained Model's Query Functions in [app-root]/test/model/order.js

Keyvan Fatehi managed to hack something amazing:

//Slight modification of the original code
var promise = sinon.stub(Order, 'find').returns({
    populate: sinon.stub().returns({
        sort: sinon.stub().returns({
            exec: sinon.stub().yields(null, {
                id: "1234553"
            })
        })
    })
});

Example: Stubbing Chained Query Functions with Promise in [app-root]/test/model/order.spec.js

Testing — Chained Model Function with Promises

What happens if a promise is involved?

  Order.find().populate().sort().exec().then(function(order){
    /** ... */
  });

Example: Chained Query Functions with a Promise in [app-root]/model/order.js

There is a library that solves that problem, which can be added on top of Sinon. If Sinon is not part of your testing framework, this may not be a viable alternative.

The library's name is sinon-mongoose, and it may require sinon-as-promised to resolve promises.

The code above can be tested using mocks:


Example: Import utility libraries in any .spec.js file

//code borrowed from the library's documentation
//MongooseModel: the model under test, e.g. Order
sinon.mock(MongooseModel).expects('find')
  .chain('populate').withArgs('props_1 props_2')
  .chain('sort')
  .chain('exec')
  .resolves('SOME_VALUE');//Or rejects

Example: Mocking Chained Mongoose utilities

Key Takeaway

Models are the building blocks of data driven applications. Models persist state on a medium such as a database or a file system. Well tested models not only make an application efficient from a development standpoint, they also improve the user experience from the customer's perspective.

  • Testing models is a bit hard and counter-intuitive.
  • Mocking database responses, as well as stubbing model functions, makes it kind of easier to test. The tests become a bit faster, since there is no actual database server to spin up, nor actual reads/writes to execute. Which saves time.
  • Actual database reads/writes should only be made available in integration tests (or system tests)


Testing model functions without spinning up the database is feasible. It makes unit test scenarios run faster. But it comes with a cost: there are a lot of mocks.


Going down the rabbit hole ~ Mocking database calls by wrapping Mongoose with Mockgoose, StackOverflow response that works for stubbing, Getting started with NodeJS and Mocha, SinonJS – a Mocking framework, Mocking Model Level, A TDD Approach to Building a Todo API Using Node.js and MongoDB


The service layer comes in two major flavors: as a gateway to third party service integrations, or as an abstraction layer over application business logic. The need for a service layer in a NodeJS application comes from the specialization of logic that handles cross-cutting concerns such as connecting to a database, logging or integrating with third party services.


When you integrate with a payment processor, Stripe for example, the number of call sites within your application translates into the difficulty you may face when Stripe goes out of business, or changes a function signature.

The same applies when a model is used multiple times with almost the same signature: when the naming changes from one version to another, the difficulty of renaming and retesting every usage increases as well.

To mitigate this repetition, a service layer has proved to address these kinds of issues pretty well. The service layer makes it possible to use libraries we don't control the same way as libraries we control. Changing the signature of a function in a library that we don't control then only affects one place in a library we do control: the wrapper function implemented in our service.
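As a sketch of that idea, the following hypothetical PaymentService wraps a third party payment client. The client shape and the makePayment name are assumptions for illustration, not an actual vendor API:

```javascript
//Hypothetical wrapper: the service is the only place aware of the vendor API.
function PaymentService(client) {
  this.client = client;//injected vendor SDK, e.g. a Stripe client
}

//If the vendor renames charges.create(), only this method changes --
//every caller keeps using makePayment() untouched.
PaymentService.prototype.makePayment = function (order) {
  return this.client.charges.create({
    amount: order.amount,
    currency: order.currency
  });
};
```

In tests, the injected client can be a plain object whose charges.create() is a stub, so no network call is ever made.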


The following code snippet has following characteristics

  • The program uses the service to retrieve user details from database
  • The next step is to make a payment based on some parameters from our database
  • When the payment is successful, the database has to be updated
  • When all above goes well, a confirmation email should be sent to both payer, and payee about the transaction
  • A push notification is added to the queue manager, to notify the payee at the other end of the chatroom.

  new UserService()
    .getUser(userId)//hypothetical lookup returning a Promise of the user
    .then(user => new PaymentService(user.account).makePayment(order))
    .then(payment => new OrderService(order).updatePayment(payment))
    .then(order => new EmailService().sendOrderStatusUpdate(order))
    .then(payload => new RedisService(redisClient).publish(RedisService.SYSTEM_EVENT, payload))
    .then(response => res.status(200).json(response))
    .catch(error => next(error));

Example: Sample usage of Promises and Services in [app-root]/controller/user.js


There is no one-size-fits-all solution when it comes to testing services, the services being key integration points with external systems such as databases, REST API endpoints, distributed file systems or third party SDKs.

Key pain points to keep in mind while testing services are:

  • Mocking a Payment Gateway ~ if the service writes to a third party endpoint, the challenge will be in intercepting the request and mocking a response. However, testing various scenarios on the same endpoint may be a bit of an issue: there is a need to read data and decide when to send a particular kind of response, for instance an error due to corrupt/missing data. Another way is to stub the function that calls the third party endpoint.
  • Stubbing Database Read/Write operations and Mock Responses
  • Mocking Response from Third Party REST APIs and Systems
  • Mocking Publish to redis Database


The modularization of the service layer may be a bit tricky. Some services may export objects (instances of a class), others may export classes. However, it is possible to have both.

As in previous examples, a service file exports either an instance, a class, or both. The index in the services directory exposes all services in that directory.

Sometimes unit tests help to identify and mitigate circular dependency problems. Structuring the files in a way that helps identify relationships among classes can also help.


The key refactoring point is to make use of utility libraries when working with service classes. When possible, dependency injection makes it possible to mock third party dependencies at creation time.


This testing section will explore the following use cases

  • Stripe Use Case > Stubbing Stripe with sinon — using stub.yields
  • Database bound Service Use Case > Testing a database bound service builds on use cases we have seen in the chapter about working with Models. Most techniques used there apply here as well, whether for modularization, mocking the database instance, or stubbing read/write functions.
  • redis Pub/Sub Use Case > Testing Pub/Sub builds on top of two concepts: testing asynchronous code and working with WebSockets. These two concepts have been discussed in depth in the following two chapters: working with Async callbacks and working with WebSockets.
  • Mailgun Use Case > It is worth mentioning that integration with a third party service takes two, but complementary, steps. The first step is to dedicate a library that initializes the third party service; the reason behind this measure is to make it easy to mock the library itself. The second step is to group operations — business logic that the service provides — in one class. That class is a service.
    > Testing Mailgun's .send() with Mocha and Sinon

Key Takeaway

If there is one modularization technique that makes sense, adding a service layer beats them all.

  • The service layer decouples most of the business logic from the rest of the application.
  • The service layer abstracts most of the heavy lifting in key areas such as database access, library integration, and third party service integration. All that while keeping an easy testability profile.


This section focused on testing services in isolation, with a focus on stubbing expensive functions and simulating their results with mocked data. It also introduced services as a way to decouple business logic scattered across Routes/Controllers/Models into one place, where it can be tested in isolation.


It is hard nowadays to imagine a realtime application that doesn't use WebSocket at some point. The success of WebSocket rests not only on its secure-able full duplex capabilities, but also on being an open standard supported in major, if not all, web servers and web browsers.

The WebSocket protocol is established via an upgrade of the HTTP protocol and supports full duplex message passing, replacing workarounds such as long polling.


This chapter introduces some techniques to test WebSocket communication from a server standpoint. We will explore tactics to avoid spinning up an actual WebSocket server. We will introduce the use of a queue manager to make inter-process communication possible, especially in a clustered environment.

We will explore strategies to write testable code, good to run in environments that do not necessarily have an actual redis instance or a database for session storage, nor open an actual WebSocket connection to a remote client. This is a classic problem when running tests on a continuous integration server.


To reduce boilerplate, and avoid re-inventing the wheel, the socketio library will be used in the following example. Communication within a single process is obvious, but passing messages between processes requires a single source of truth that all processes subscribe to. The redis key-value store will serve as our queue manager.

Since redis is most of the time coupled with WebSocket connections, for authentication and inter-process communication purposes, it makes sense to look at those two components at the same time.

var http = require('http'),
    hostname = 'localhost',
    port = process.env.PORT || 3000,
    app = require('express')(),
    authenticated = require('lib/middleware'),
    server = require('http').createServer(app),
    store = require(''),
    io = require('socketio')(server),
    redis = require('redis'),
    redisClient = redis.createClient(),
    subscribe = redisClient.sub;

//Listening on a port
// WARNING: app.listen(port) will NOT work here!
server.listen(port, hostname);

//Application Request Handler
app.get('/', function (req, res) {
  return res.status(200).send('Hello World!');
});

//sharing a middleware with express -- for example an authentication middleware
io.use(function (socket, next) {
  authenticated(socket.handshake, null, next);
});

//registering redis store adapter
io.adapter(store({ host: 'localhost', port: 6379 }));

//reading messages on socket
io.on('connection', function (socket) {
  socket.on('message', function (payload) {
    console.log(`Example SocketIO received: ${payload}`);
  });
});

//closing the redis channel on disconnect
io.on('disconnect', function (socket) {
  subscribe.removeListener('any:messaging:channel', (channel, data) => {/** notifications */});
  subscribe.quit();
});

Example: Code of a server using socketio in [app-root]/server.js

The code sample in this chapter is taken verbatim from the Server chapter's code sample. Modularization in the Server chapter focused on making the server code modular. In this chapter, the focus will be on the WebSocket, with the support of a queue manager via a redis instance.


There are two sets of challenges, namely: challenges related to WebSocket itself and challenges related to redis testability. The objective in this chapter is to try to kill two birds with one stone.

First, let's break down key challenges found in the current chapter's code sample:

  • The code uses multiple component parts that make it hard to reason about. This has to change.
  • Having redis.createClient() everywhere makes it hard to mock. We cannot easily control the creation/deletion of redis instances (pub/sub), since redis.createClient() calls are found throughout the project.
  • One way to solve the previous challenge is to create one instance (preferably while loading a top-level module), and inject that instance into dependent modules.


The modularization of WebSocket communication is twofold. The first aspect deals with the WebSocket library itself — we use socketio throughout this book, but the techniques explained here may be used with other libraries as well. The second aspect adds a queue manager. The queue manager dependency arises from the need to make inter-process communication possible. JavaScript runs on a single thread, but it is possible to spawn multiple independent processes off the main process. This is a special case when using the cluster capabilities via the nodejs SDK's cluster API.

Since each spawned process is independent, communication between those processes becomes impossible, unless those processes share a point of exchange.

  • To avoid having redis initialization in more than one place, it makes sense to move initialization to special utility library. The WebSocket initialization will follow the same approach.

  • Initialization of WebSocket listeners and redis clients is prone to be replicated across the application. There will be a need to delegate initialization to two dedicated initialization libraries: util/redis and util/socketio.


Following the modularization recommendations stated above, the code sample will be refactored for testability in the following order:

  • introducing initialization libraries for both socketio and redis.
  • ejecting initialization logic from server.js to util/socketio and util/redis
  • making these two libraries discoverable via util/index.js file, to complete the modularization of initialization libraries
  • initializing socketio and redisClient instance in server.js using these two newly created libraries
  • injecting socketio and redisClient instance into libraries that need to use those instances
  • grouping socketio and redis business logic into re-usable services. The end-result will be located in service/socketio and service/redis
  • injecting socketio and redis client instance into services

First things first, the following example shows how WebSocket initialization code can be ejected from the server initialization file and delegated to a utility module.

var socket = require('socketio');

/**@param {Object<ExpressServerInstance>} server - server or express app instance*/
module.exports = function (server) {
  var io = socket();
  io = io.listen(server);
  io.on('connect', function connectHandler() { /**...*/ });
  io.on('disconnect', function disconnectHandler() { /**...*/ });
  return io;
};

Example: Modularization of socketio with an expressjs dependency in [app-root]/util/socketio

The example provided above may be the first step in decoupling WebSocket initialization, but definitely not the last one. In fact, handling events from the utility library adds complexity to the utility library itself. To go even further, event handling will be delegated to another module. The final initialization will have the following composition:

/**
 * Server created by the caller as: require('http').createServer(app)
 * @param {Object<ExpressServerInstance>} server - server or express app instance
 * @param {Object<SocketIOInstance>} sio - Optional SocketIO instance
 */
module.exports = function (server, sio) {
  var io = sio || require('socketio');
  io = io.listen(server);
  return io;
};

Example: Better Modularization of socketio with an expressjs dependency in [app-root]/util/socketio

The optional socketio instance parameter makes it possible to mock the socket altogether. An injected socket instance takes precedence over the locally initialized instance.
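A sketch of why the optional parameter matters in a test — the utility function is inlined here for illustration, and the fake server/io objects are assumptions:

```javascript
//Inlined version of the util/socketio factory shown above.
var socketio = function (server, sio) {
  var io = sio || require('socketio');//real library only when nothing is injected
  io = io.listen(server);
  return io;
};

//A test injects a fake instance; the real library is never loaded.
var fakeServer = {};
var fakeIo = {
  listen: function (server) {
    this.attachedTo = server;//record the server for later assertions
    return this;
  }
};
var io = socketio(fakeServer, fakeIo);
```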

Express routes use the SocketIO instance to deliver messages over the WebSocket protocol to connected clients — clients which are not limited to browsers. The structure of the WebSocket initialization in the server file takes the following shape:

var express = require('express'),
    app = express(),
    server = require('http').createServer(app), 
    io = require('util/socketio')(server);
    /** The rest of the program as in Code sample*/

Example: Attaching Server to Modularized socketio in [app-root]/server.js

At this point, WebSocket uses the same server instance as the rest of the application. The difference is that the server has been upgraded to be aware of WebSocket traffic.

There is still a question at this point though: which layer is going to handle WebSocket events? The rest of the application's logic is located either in Controllers or in the Router. To abstract over those two components, we are going to use a fictitious lib/module library.

The lib/module/socketio module will have tasks such as using the existing server and io instances to handle authentication, and dealing with events as they come in or go out. The code should also be testable in isolation. This idea is expressed in the following lines:

var authenticated = require('lib/middleware'), 
    store = require('');

/**
 * @param {Object<ExpressInstance>} app 
 * @param {Object<SocketIOInstance>} io
 * @param {Object<RedisClientInstance>} redisClient
 */
module.exports = function (app, io, redisClient) {
  //Using instance of redisClient to subscribe to channels
  var subscribe = redisClient.sub;

  //Checking if app has registered the middleware instance already
  var isAppAuthenticated = app._router ? app._router.stack.some(layer => layer && layer.handle && === 'authenticated') : false;
  if (!isAppAuthenticated) {
    app.use(authenticated);//makes socket.request.session available on the socket
  }

  //sharing a middleware with express -- for example an authentication middleware
  io.use(function (socket, next) {
    authenticated(socket.handshake, null, next);
    // or simply
    //authenticated(socket.request, socket.request.res, next);
  });

  //registering redis store adapter
  io.adapter(store({ host: 'localhost', port: 6379 }));

  //reading messages on socket
  io.on('connection', function (socket) {
    //socket.request.session available and same as in app object
    socket.on('message', function (payload) {
      console.log(`Example SocketIO received: ${payload}`);
    });
  });

  io.on('disconnect', function (socket) {
    subscribe.removeListener('any:messaging:channel', (channel, data) => {/** notifications */});
    subscribe.quit();//closing the redis channel
  });

  return io;
};

Example: Modularizing WebSocket logic to enhance server ability to deal with WebSocket traffic in [app-root]/lib/module/socketio.js — Source: Checking if a middleware has been used

Obviously, the piece of code in lib/module/socketio is largely a copy of the server.js code sample. The initialization of the server and the WebSocket instance changes as in the following code. It is clear that there is a new guy in the neighborhood: lib/module/socketio.

var express = require('express'),
    app = express(),
    server = require('http').createServer(app), 
    redis = require('redis'),
    redisClient = redis.createClient(),
    sio = require('util/socketio')(server), 
    //enhancing io with event handlers
    io = require('lib/module/socketio')(app, sio, redisClient);
    /** The rest of the program as in Code sample*/

Example: Attaching Server to Modularized socketio in [app-root]/server.js

It is worth mentioning that the nodejs module loader caches already loaded modules. That mechanism provides a kind of singleton instance by default — which is positive. The side effect is that mocking the library will require an additional hack.

Besides that, this enhancement looks better compared to the initial code sample. There is still room for improvement — not to mention a remaining challenge: the redis instance is initialized in server.js, which makes it hard to mock while testing. We can do better on that front.

Technically speaking, exporting a ready to use redis instance can be achieved as in the following example:

  const redis = require("redis"); 
  const port = process.env.REDIS_PORT || "6379";
  const host = process.env.REDIS_HOST || "";
  module.exports = redis.createClient(port, host);

Example: Modularizing redis — creating a module that exports a client in [app-root]/utils/redis.js

Since the node module loader returns the cached instance, there is nothing we can change after the first time the module is loaded. There is a workaround for this behavior though. We are going to let server.js, or any other caller such as a test, require redis from node_modules. That instance will then be injected into the utility library, which in return creates a client. In code, this translates into the following example.

  /**@param {Object<InstanceOfRedis>} redis - redis instance initialized by the caller*/
  module.exports = function (redis) {
      const port = process.env.REDIS_PORT || "6379";
      const host = process.env.REDIS_HOST || "";
      return redis.createClient(port, host);
  };

Example: Modularizing redis — creating a module that exports a client with an injected redis instance in [app-root]/utils/redis.js

There are a couple of things to notice about this strategy:

  • Initialization of the library is delegated to the caller.
  • The library doesn't have to know whether a redis instance is the real one or a fake. That makes it possible to inject a mocked instance into this library, or replace it altogether while testing.
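A sketch of that second point — because the caller supplies the redis module, a test can supply a plain fake. The factory is inlined here for illustration:

```javascript
//Inlined version of the util/redis factory shown above.
var makeRedisClient = function (redis) {
  var port = process.env.REDIS_PORT || "6379";
  var host = process.env.REDIS_HOST || "";
  return redis.createClient(port, host);
};

//A test injects a fake redis module; no server is ever contacted.
var fakeRedis = {
  createClient: function (port, host) {
    return { port: port, host: host, isFake: true };
  }
};
var client = makeRedisClient(fakeRedis);
```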

As a quick reminder, here is how the server file should be refactored after moving redis initialization to its own initialization library:

var express = require('express'),
    app = express(),
    server = require('http').createServer(app), 
    redis = require('redis'),
    //Using injected redis to initialize a usable redis client 
    redisClient = require('util/redis')(redis),
    sio = require('util/socketio')(server), 
    //enhancing io with event handlers
    io = require('lib/module/socketio')(app, sio, redisClient);
    /** The rest of the program as in Code sample*/

Example: Attaching Server to Modularized socketio and redis in [app-root]/server.js


The following test example showcases two key points that were made easier by the modularization process:

  • Simulating a WebSocket communication to avoid latency in tests, while reducing environment setup and dependencies.
  • Simulating inter-process communication, without the burden of having an actual multiple processes running. The inter-process communication is made possible by redis via a pub/sub mechanism.

To be successful while testing WebSockets, mocking the redis pub/sub system becomes a must:

  • The application may be using redis (local or remote)
  • Multiple tests stress the redis server (local or remote)
  • Mocking the redis interaction makes tests run faster, and reduces friction caused by the network
  • Mocking makes it possible to run tests without spinning up a redis server.

There is more than one way to go about mocking. I previewed a handful of libraries and chose the one that best fits my needs. Some of those libraries are: rewire, fakeredis, proxyquire and plain old sinon.

  • Using rewire
var Rewire = require('rewire');
//module to mock redisClient from 
var controller = Rewire("/path/to/controller.js");
//the mock object + stubs
var redisMock = {
  //get|pub|sub are spies that can return a promise|or do other things
  get: sinon.spy(function (options) { return "someValue"; }),
  pub: sinon.spy(function (options) { return "someValue"; }),
  sub: sinon.spy(function (options) { return "someValue"; })
};
//replacing the redis client methods :::: this does not prevent spinning up a new redis server. At this point the redis server is already up and running.
controller.__set__('redisClient', redisMock);

Example: Mocking redis with sinon spies in [app-root]/test/utils/index.js

  • Using fakeredis: fakeredis provides a drop-in replacement for redis's createClient() function and its functionality.
var sinon = require('sinon'),
    assert = require('chai').assert,
    redis = require("redis"),
    fakeredis = require('fakeredis'),
    //additional variables 
    users, client; 

describe('TestCase', function(){
    sinon.stub(redis, 'createClient', fakeredis.createClient);
    client = redis.createClient(); //or anywhere in code it can be initialized
});


Example: Using Spied Upon redis functions in [app-root]/test/utils.spec.js

  • Using redis-mock

The goal of the redis-mock project is to create a feature-complete mock of node_redis, so that it may be used interchangeably when writing unit tests for code that depends on redis.

  • Using proxyquire

The goal of proxyquire is quite similar to that of redis-mock.

Since the individual redis and socketio creators are in their own libraries, mocking objects created by those libraries becomes as easy as creating an empty object. That was the objective of this chapter.

Key Takeaway

Amongst many issues addressed in this chapter, the following key points are important to keep in mind while working on a real-time application:

  • It is possible to use session middleware between Socket.IO and Express.
  • Multiple tests, running in parallel or in sequence, stress any redis server — local and remote alike. Mocking the redis pub/sub makes tests run faster, and reduces friction caused by the network. Avoiding spinning up an actual redis server altogether is even better.
  • Grouping stub utilities in a module maximizes code re-usability across unit tests.
  • Initializing socketio via a utility makes it relatively easy to mock, while making it possible to validate implementations involving inter-process communications.
  • Leveraging modularization makes it easier to simulate inter-process communication. Mocking read-write operations normally done by a redis instance makes sure our tests can run in environments that do not necessarily have an actual redis instance running.
  • Modularized middleware makes it possible to simulate a session, and session sharing between communication protocols: HTTP and WebSocket.


Testing WebSockets can be tricky –– in this section we demonstrated how modularization can help break down the larger problem into smaller, easily testable chunks. Mocking the remote server, request and response also made it possible to improve test response time.


Readings on other people's questions

Going down the rabbit hole ~ A good way to learn is asking questions, or answering others' questions. Some of the questions people ask: High Volume, low latency difficulties node/pub sub/redis, examples using redis-store with, Using redis as PubSub over Socket.IO and Modularizing with express 4, nodejs databases: using redis for fun and profit

The second part of other people's questions

Going down the rabbit hole ~ Read the following articles about structuring your NodeJS application: Building a Chat Server with node and redis – tests and Bacon.js + Node.js + MongoDB: Functional Reactive Programming on the Server

Readings on mocking redis

Going down the rabbit hole ~ The first redis mocking library I looked into was redis-mock. You may find it interesting, if not useful in your case. rewire provides another alternative ~ easy monkey-patching for node.js unit tests. proxyquire ~ proxies nodejs require in order to allow overriding dependencies during testing. Faking redis in Nodejs with fakeredis, Testing Socket.IO with Mocha, Should.js and Socket.IO Client, Sharing session between Express and SocketIO, Faking redis in Nodejs with fakeredis — a tutorial, Mock redis Client, then stub function with sinon ~ rewire

The last batch of rabbit holes ;–)

Going down the rabbit hole ~ Testing Socket.IO with Mocha, Should.js and Socket.IO Client, Sharing session between Express and SocketIO, and Managing modularity and redis connections in nodejs

WebSocket Endpoints

Going down the rabbit hole ~ Testing Socket.IO with Mocha, Should.js and Socket.IO Client, and Sharing session between Express and SocketIO

Background Jobs

Background jobs, also called scheduled tasks, are scripts that run in a timely fashion, most of the time on a thread other than the main execution thread.

There are two kinds of background jobs discussed in this section: jobs scheduled via a queue manager, and workers running in a separate process from the main process.


A common use case of a background job is the scheduled job. Job queue managers will be abstracted and simulated with mocked objects, to make things a little easier. This section lays the ground to build and test scheduled jobs.

One of the libraries available in the JavaScript community to schedule jobs is Agenda. The test example will use this library, but the overall philosophy is technically the same with any other choice.

At the end of the section, there are other examples to choose from. The choice is going to depend on project scope or use case. It is not cast in stone, but based on project requirements.


One requirement popular with SaaS and cloud native applications is the ability to send delayed emails. The triggers that initiate scheduling an email range from a new registration to an attempt to delete an account.

The following code represents a use case where a new user registers with our service. We will send a welcome email right away, and schedule a follow-up email in the next 24 hours. The code is rusty, and will be our guinea pig, a.k.a server.js, that we will then attempt to modularize for testability.

//Job trigger can be used with routes as in following example
var app = require('express')(),
    User = require('./models/user'),
    EmailService = require('./util/email'),
    Agenda = require('agenda'),
    agenda = new Agenda({ /** configurations */});
//jobs definition
agenda.define('registration email', function(job, done) {/* ... more code*/});
agenda.define('user onboarding email', function(job, done) {/* ... more code*/});

//route processing'/users', function (req, res, next) {
  new User(req.body).save(function (err, user) {
    if (err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent in 24 hours
    //This triggers a task to send the registration email right away.'registration email', { userId: user.primary() });
    agenda.schedule('in 24 hours', 'user onboarding email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});
//more routes and other spaghetti code
app.listen(port, function () {
  //registering the job somewhere when the server starts;
});

Example: Defining and using Jobs in [app-root]/server.js

A quick example of how this can be integrated in an existing application may look more like the following. This next source code is provided for illustration purposes, not for testing. The reference for testing such an example can be found in the Route/Controller section.

//Job trigger can be used with routes as in following example
var app = require('express')(),
    User = require('./models/user'),
    agenda = require('./scheduler/agenda');'/users', function (req, res, next) {
  new User(req.body).save(function (err, user) {
    if (err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent in 24 hours
    //This triggers a task to send the registration email right away.'registration email', { userId: user.primary() });
    agenda.schedule('in 24 hours', 'user onboarding email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

app.listen(port, function () {
  //registering the job somewhere when the server starts;
});

Example: Sample to schedule a Job [app-root]/server.js


The challenges in testing this chapter's example come not only from bad code structure, but also from the number of libraries it integrates. To make our task easier, we will need to do the following:

  • Break the big components into smaller components ~ we refer to this refactoring process as modularization.
  • It would be hard to mock agenda after it has already been loaded and initialized. It is much easier to mock an instance that we control — or even replace the whole library altogether.
  • The code's purpose is to make sure the right jobs are ready(tested in isolation). Keeping job definition in routes prevents achieving that. To remedy this, modularized job definitions will have to be in their own libraries, and tested before integrating the routes.


Agenda made the cut based on its ability to schedule tasks using human readable instructions, its ability to persist jobs in a mongodb instance, and its transparent API. In fact, testing with this library feels the same as testing a callback. Kue is another library that was considered.

The following actions can be taken to make the code more approachable.

  • To make it possible to mock agenda when the time comes, it makes sense to introduce a library responsible for initializing it. For simplicity reasons, we will refer to this library as util/agenda. The sample code and initialization are provided in the refactoring section.
  • The service layer already has its own library. Since the example uses email, we will refer to this library as service/email.
  • For simplicity reasons, util/index.js exposes all libraries implemented under util directory for easy access. Likewise, service/index.js will expose all services implemented under service directory.


The modularization made it possible to define a file structure that is home to the new job definition library. That library, however, does not decide how the Agenda instance is going to be initialized. This makes it possible to mock the agenda instance easily.

A typical job definition interface looks as follows:

  agenda.define('registration email', function(job, done) {/* ... more code*/});

Example: Sample of a Job [app-root]/jobs/email.js

The previous job definition can be attached to any agenda instance. That is what this refactoring introduces. Any code may initialize an Agenda instance and pass it over to the email job library, which in return attaches job definitions. This approach makes it possible to mock the agenda instance from any testing code, or test the jobs in isolation.

var EmailService = require('./util/email'); 
var User = require('./models/user.js');

/**@param {Object<AgendaInstance>} agenda - Object having define as a stub candidate*/
module.exports = function (agenda) {
  agenda.define('user onboarding email', function (job, done) {
    User.findById(, function (err, user) {
      if (err) { return done(err); }
      var message = ['Thanks for registering ',, 'more goes here somehow'].join('');
      return new EmailService(, message).send(done);
    });
  });

  // More email related jobs
  agenda.define('registration email', function (job, done) {/* ... more code*/});
  agenda.define('reset password', function (job, done) {/* ... more code*/});
};

Example: Sample of a Job [app-root]/jobs/email.js
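The injected agenda can then be exercised in a test with a recording fake whose define() simply captures handlers — a sketch assuming the job names above:

```javascript
//Hypothetical jobs module attaching definitions to an injected agenda.
var emailJobs = function (agenda) {
  agenda.define('user onboarding email', function (job, done) {/* ... */});
  agenda.define('registration email', function (job, done) {/* ... */});
};

//A fake agenda records definitions instead of scheduling anything.
var defined = {};
var fakeAgenda = {
  define: function (name, handler) { defined[name] = handler; }
};
emailJobs(fakeAgenda);
//defined now maps each job name to its handler, ready for assertions
```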

To curb effort spent on testing alone, small chunks of functionality can be moved to independent libraries. Only the direct dependencies those libraries need should be tightly coupled, and only as a last resort.

var EmailService = require('service/email');
/**@param {Object} agenda - instance of agenda initialized by the caller*/
module.exports = function(agenda){
    //using tightly coupled EmailService here.
};

Example: Modularizing Job with Services in [app-root]/jobs/email.js

Injecting a decoupled agenda makes it possible to test the task in isolation, without even needing to import the actual agenda package into the testing code. One way to initialize the job scheduler is to use a dedicated module.

/**@return {Object} an initialized Agenda instance*/
module.exports = function(){
    var Agenda = require('agenda');
    return new Agenda({/*configurations*/});
};

Example: Modularizing Job in [app-root]/util/agenda.js
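To make the decoupling concrete, the sketch below shows that any object exposing define() can stand in for an Agenda instance in a test. The fakeAgenda object and the reduced emailJobs function are illustrative inventions, not part of the project's code:

```javascript
//A stand-in for Agenda: any object exposing define() will do
var fakeAgenda = {
  defined: [],
  define: function(name, handler){ this.defined.push(name); }
};

//what jobs/email does, reduced to its essence
function emailJobs(agenda){
  agenda.define('user onboarding email', function(job, done){ done(); });
}

//in production: emailJobs(require('./util/agenda')());
//in tests, inject the fake and inspect what was registered
emailJobs(fakeAgenda);
console.log(fakeAgenda.defined); //=> [ 'user onboarding email' ]
```

Because the job library never creates its own Agenda instance, the test above runs without the agenda package, a database, or a scheduler ever being loaded.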


The number of dependencies is another factor that makes testing complex. The default case involves a Mongoose model, but it can also include sending emails or scheduling jobs.

The following test example showcases three key areas that the modularization process made easier: mocking agenda or any other task queue manager, mocking email or any other service the jobs may need, and simulating database reads and writes.

Testing the scheduled task that will send user onboarding email x hours after the registration involves multiple moving parts.

The program has to read user information from a database and certify authenticity and permission to receive an email. The program also has to leverage a third party's email processing capabilities. Since we run multiple tests a day, hitting the provider's servers every time a test runs would be irresponsible, so we take that possibility out by mocking requests and responses to the email server. Finally, it may take hours before we can certify that the job actually delivers the email. The pragmatic consequence is to assume that the job will do its work, and to test instead whether our EmailService is called when the time comes.

The following example depicts every aspect described in the previous paragraph.

//Things to test
var agenda = require('util/agenda'),
    User = require('models/user'),
    EmailService = require('util/email'),
    EmailScheduledJob = require('jobs/email')(agenda);

//Fixtures - remember that fixtures is a directory, but exports are defined in the same index file.
var UserFindById = require('fixtures').UserFindById,
    DefineAgenda = require('fixtures').DefineAgenda,
    SendEmailService = require('fixtures').SendEmailService;

//Helpers that help mocking
describe('SendRegistrationEmail', function(){
    beforeEach(function(){
        this.UserFindByIdStub = UserFindById(User);
        this.DefineAgendaStub = DefineAgenda(agenda);
        this.SendEmailServiceStub = SendEmailService(EmailService);
    });
    //making sure all stubs are restored after tests
    afterEach(function(){
        this.UserFindByIdStub.restore();
        this.DefineAgendaStub.restore();
        this.SendEmailServiceStub.restore();
    });
    it('works', function(){
        //Assertions go here - there is nothing to start, the test just runs
        assert(User.findById.called, 'User::findById was called');
        assert(agenda.define.called, 'Agenda::define was called');
        assert(EmailService.send.called, 'EmailService::send was called');
    });
});

Example: Testing Sending Email via a Scheduled Job [app-root]/test/scheduler/index.spec.js

A quick observer may have noticed new elements in our testing code: stubbing EmailService.send(), User.findById() and Agenda.define().

As a refresher, since the EmailService is already modularized and we control how it is initialized, the following fixture can easily be introduced.

/**@param {Object} EmailService - Object holding send() as a stub candidate*/
module.exports.SendEmailService = function(EmailService){
    return sinon.stub(EmailService, 'send', function(args){
        //replacement of the send function: executes and returns the callback passed to it
        return arguments[arguments.length - 1](args);
    });
};

Example: Stubbing EmailService.send() in [app-root]/fixtures/index.js

Similarly, the previous examples stubbing models provided a simple way to avoid hitting the database while testing, especially when tests run in environments that do not necessarily have an actual database server, such as a Continuous Integration server.

var MockedUserData = require('fixtures/mocks/user');
/**@param {Object} User - Mongoose model having findById() as a stub candidate*/
module.exports.UserFindById = function(User){
    return sinon.stub(User, 'findById', function(){
       //findById always yields (null error, mocked Mongoose document)
       return arguments[arguments.length - 1](null, MockedUserData);
    });
};

Example: Stubbing Modularized User Model's User.findById() in [app-root]/fixtures/index.js

Last but not least, agenda initialization went according to plan. There is a modularized library that can either be used to create a real instance, or be mocked altogether. Whatever the choice, stubbing agenda#define() makes sure the program doesn't wait 24 hours before we check whether the email was sent. The next stub bypasses the wait, while keeping the execution flow intact.

/**@param {Object} agenda - Agenda instance having define() as a stub candidate*/
module.exports.DefineAgenda = function(agenda){
    return sinon.stub(agenda, 'define', function(job, done){
       //forward passed callback with original job (or MockedJobData)
       return arguments[arguments.length - 1](job || MockedJobData, done);
    });
};

Example: Stubbing Agenda instance's define() function in [app-root]/fixtures/index.js

Modularization applies to the testing file structure as well. You may have noticed that all fixtures are imported from fixtures/index.js, or simply /fixtures. The index.js plays the role of a gateway to fixture definitions. Most testing related modularization lives in the /fixtures directory.

Key Takeaway

A lot has been said in this chapter. The following key points are important to remember:

  • Grouping stub utilities in a module maximizes code re-usability across unit tests.
  • Initializing agenda via a utility makes it easy to mock.
  • Using modularized email via a service makes it possible to mock sending emails via a third party. In the current example the email delivery service is mailgun, but any third party service could take its place.
  • Using modularized models, either via a service or as standalone models, makes it easy to stub functions that would otherwise read from or write directly to the database. Modular model stubs are also easy to share across test files that need to stub one or multiple read-write functions.


Breaking down the route into smaller, library-like modules pays off, not only for testing purposes, but also for maintenance. When a problem arises, isolated code tends to be easier to debug than spaghetti code.

Following the strategy laid out in this chapter can help to deal with a never ending list of requirements. One that may sound familiar is to push a notification message to the administrator about the status of a newly registered user, or the status of emails that have been sent or are scheduled to be sent.


The addendum provides additional information on subjects that do not have enough material at the moment –– NOT because those subjects don't matter. Quite the contrary.


This section provides a high-level overview of deployment: reducing deployment friction, reaching zero downtime, choosing infrastructure, dealing with memory leaks and writing good documentation.


A typical NodeJS deployment follows, in one way or another, the following steps:
  • download source code using git, wget, npm or any other package manager of your choice
  • configure, or inject, environment variables
  • symlink vital directories such as log, config, nginx config
  • restart any dependent services the application needs to run, for instance databases (mongodb, couchdb, etc.), data-stores (redis, etc.), load balancers or web servers (nginx, etc.)
  • restart the application server

The following example depicts the above comments.

# Using Git to pull latest code
$ sudo git pull                 # or > git clone git-server/username/appname.git
# Using npm 
$ sudo npm install appname      # requires to have access to service on hosted package manager

# do manual or automated configuration here
# do manual or automated symlink here 

# restarting dependent services 
$ sudo service nginx restart    # nginx|apache server
$ sudo service redis restart    # redis server
$ sudo service mongod restart   # database server in some cases

# restarting application server
$ sudo service appname restart  # application itself, in our case: hoogy

# rollback (revert symlinking) when something goes awfully bad here.

Example: Sample of CLI deployment scripts

PS: The services above are assumed to be managed by the system's service manager (e.g. upstart or systemd).

Reducing the number of steps is a must when automating the whole process. If one of the above steps breaks, it is better to have a rollback strategy in place. Tagging releases and versioning packaged applications make the whole process even easier.

Reducing Friction

One time-tested way to reduce friction and achieve faster deployments is to bundle applications together with their dependencies. As a quick example, Java releases .jar|.war files, in which all dependency libraries are bundled into one executable artifact.

Rule of thumb “Build your dependencies into your deployable packages”

In JavaScript in general, and NodeJS in particular, the most common tactic to reduce friction is to publish the application as an npm package. If you do not want to purchase yet another subscription, you still have the alternative of hosting an npm-compatible package on GitHub.
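For the record, npm can install a dependency straight from a git repository, so a registry subscription is not strictly required. A hypothetical package.json entry (repository name and tag are placeholders) may look like:

```json
{
  "dependencies": {
    "appname": "git+https://github.com/username/appname.git#v1.0.0"
  }
}
```

Pinning to a tag such as #v1.0.0 keeps such deployments reproducible, much like a published version would.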

Down the rabbit hole ~ Another way of reducing friction while deploying a NodeJS application is to use containers. Getting started with Kubernetes and NodeJS can help you get started managing deployments with Kubernetes.

Push to deploy

The push-to-deploy model is yet another way to go to production often, faster and reasonably safely. It democratizes deployment procedures, and makes it easy to spot, fix and release patches relatively faster than a classic massive deployment.

The drill works as follows: a push to the live or master branch triggers a code download on the live server. A post-receive hook detects the end of the download and runs the deployment scripts.

If anything goes bad, the symlink-and-restart step doesn't happen, hence preserving the integrity of your application. When everything works as planned, the symlink-and-restart step executes, resulting in a successful release and deployment. This process is commonly known as Continuous Deployment.

# Server Side Code
$ apt-get update  #first time on server side
$ apt-get install git #first time git install 
$ apt-get update  #updating|upgrading server side code

# create bare repository + post-receive hook 
# @link
# first time initialization
$ cd /path/to/git && mkdir appname.git
$ cd appname.git
$ git --bare init

# Post-Receive Hook
cd /path/to/git/appname.git/hooks
touch post-receive
# line to add in post-receive
GIT_WORK_TREE=/path/to/git/appname git checkout -f

# change permission to make the file executable
chmod +x post-receive

# Restart Services + Servers 

Example: Sample of Deployment Scripts with Symlinking

Git WebHook

Alternatively, and more advanced, the push-to-deploy model may be used with WebHooks. WebHooks are the lingua franca of web services. They provide a means of sending commands to remote instances, the same way REST works, but this time between machines.


Going down the rabbit hole ~ since this book is not about systems design, the following articles may help you understand more about this feature: 1) Continuous deployment with github + gith, 2) Setting up push-to-deploy with git – Rollback strategy

Build servers

The push-to-deploy model looks attractive, but comes with big risks. In a larger team, how do you guarantee the safety of every deployment? One way is to run pre-push|pre-commit tasks to analyse code quality. Some developers may comply, and others may go rogue. Needless to say, it may take time to update all sanity check scripts across the development team.

A centralized, platform- and developer-independent system that checks sanity and determines whether code can integrate well with an existing system is the hallmark of a mature workflow. This is where build servers come into the picture. Build servers are servers tasked to receive release candidates, execute test and build tasks, and green-light or red-light releases for production. When a release has been green-lighted, the code continues automatically to production (Continuous Deployment) or is tagged as ready for release (Continuous Delivery).

Build servers can also be referred to as Continuous Integration servers, especially when their tasks go beyond building packages.


Going down the rabbit hole ~ with this non-exhaustive list of CI servers: 1) Distelli, 2) Magnum, 3) Strider, 4) Codeship and many more.

Zero downtime

A NodeJS server, like any server indeed, may go down for various reasons. Even though this book doesn't focus on product maintenance, the following ideas may nevertheless be good to know. Some of the reasons applications experience downtime can be detected using events such as uncaughtException, unhandledRejection or SIGTERM (the UNIX termination signal). The same mechanism applies when updating the application code base, to achieve zero downtime while deploying the latest version.

To recover from failure, the events stated above give a second chance to applications that leverage the cluster API to restart failing processes. The drill works as follows: the master cluster process waits for a SIGHUP (update/code push) signal, and sequentially terminates old worker processes before starting new ones. You can find this gist useful.
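The sequential part of that drill can be sketched as a small helper. Everything below is illustrative: rollingRestart and its callback-taking disconnect() are inventions of this sketch; real cluster code would listen for the worker's 'exit' event rather than pass a callback to disconnect().

```javascript
//Restart workers one at a time: stop a worker, fork a replacement,
//then move on to the next, so at most one worker is down at any moment.
function rollingRestart(workers, replace, done){
  (function next(i){
    if(i >= workers.length){ return done(); }
    workers[i].disconnect(function(){      //old worker is gone
      replace(function(){ next(i + 1); }); //replacement is listening
    });
  })(0);
}

//Hypothetical wiring with the cluster API (not executed here):
//  process.on('SIGHUP', function(){
//    rollingRestart(oldWorkers, forkAndWaitForListening, onDone);
//  });
```

Because the fork and disconnect mechanics are injected, the restart order itself can be unit tested with plain fake workers, no processes involved.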

Another more common way is to deploy to platforms such as Heroku or OpenShift, commonly known as PaaS (Platform as a Service). Container based deployments, such as Docker or Kubernetes, also make it possible to deploy new code with zero downtime. These platforms spin up new servers on every new pushed/cleared version, and provide a rollback as soon as any deployment fails.

Going down the rabbit hole ~ More resources that can help achieve zero downtime deployment: Reloading node with zero downtime, Setting up express with nginx and pm2, Zero-Downtime automated Node.js deployment, Zero downtime redeploys, Deploying and Scaling Zero Downtime NodeJS application; you may also be amazed by Hardening node.js for production part 3: zero downtime deployments with nginx


Downtime will always happen, no matter how well your system has been tuned to avoid it. The worst nightmare is not knowing in time that some sub-systems, or to some extent whole systems, went down. This scenario is what monitoring agents are for.

The most rudimentary monitoring service triggers an email (notification/text message) when certain events are caught. In the code below, 1) the process.on calls show examples of possible events, and 2) triggerNotification is a typical event handler that can be re-used across events. In a nutshell, NodeJS emits some events before the server dies. From there, it becomes possible to tap into those events and trigger a notification to the system administrator. Since the application may not recover from some of these events, it is wise to rely on a third party messaging service to deliver such notifications. The triggerNotification function uses mailgun as an example.

  function triggerNotification(event){ mailgun.send({message}); }
  process.on('uncaughtException', triggerNotification); 
  process.on('unhandledRejection', triggerNotification);
  process.on('SIGHUP', triggerNotification);
  process.on('SIGTERM', triggerNotification);

Example: Handling Special kind of Events
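To tell the administrator which event actually fired, the handler can be produced by a small factory. The makeNotifier helper below is hypothetical; send stands for any delivery function, such as mailgun's, injected by the caller.

```javascript
//Hypothetical factory: closes over the event name so the notification
//says which signal or event actually fired
function makeNotifier(send, eventName){
  return function(reason){
    send({
      subject: 'Process event: ' + eventName,
      body: String(reason || 'no details available')
    });
  };
}

//wiring (send would be e.g. mailgun's send function):
//process.on('uncaughtException', makeNotifier(send, 'uncaughtException'));
//process.on('SIGTERM', makeNotifier(send, 'SIGTERM'));
```

Injecting send also makes the notifier trivial to unit test, in the same spirit as the fixtures earlier in this chapter.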

Going down the rabbit hole ~ with some third party services that can help you know when something goes wrong – UptimeMonitoring-dashboard


For practical reasons, and from the customer's standpoint, it is imperative that your application provide 99%+ uptime. One strategy to make zero downtime a reality is to break down larger systems into smaller sub-systems. Smaller sub-systems may not necessarily translate into microservices. Installable libraries, also known as packages, are a good example of sub-systems; the same goes for frameworks.

For the sake of easier large scale application maintenance, deploying smaller sub-systems to various platforms makes it possible to achieve zero downtime. Since this section offered rather raw ideas, the next section curates a reading list about infrastructure and achieving zero downtime.


Going down the rabbit hole ~ If you want to know more about infrastructure, and how to “deploy your site through Netlify and add HTTPS, CDN distribution, caching, continuous deployment”, you should definitely visit Netlify

Memory Leak

Managing memory leaks in JavaScript applications can be a daunting task, not least in a NodeJS environment. For the time being, this book doesn't provide tips on memory leaks, but rather a curated list of articles that can help tame the beast:


Going down the rabbit hole ~ with a couple of articles where you can find more information on memory leaks in NodeJS: 1) Hunting a Ghost – Finding a Memory Leak in Node.js ~ a RisingStack article, 2) Simple Guide to Finding a JavaScript Memory Leak in Node.js ~ an Alex Kras blog, 3) Tracking down Memory leaks in NodeJS – A NodeJS Holiday Season, and 4) How to self detect a memory leak in node


Documentation is a vital tool to support code health over a long period of time. Good documentation makes sure knowledge transfers easily to anyone who works on your code in the future. Automated tests, when done right, are an integral part of knowledge sharing.

Some tools you can look into to keep documentation in sync with code changes are listed in the following section.


Going down the rabbit hole ~ with API documentation the easy way with Slate. Slate is like Swagger, but sexier. To generate documentation based on code comments, DocumentationJS, jsdoc or docco can help you out.


The reading list and references

The references section gives credit where it is due. It is also a collection of resources that move the NodeJS ecosystem forward.

The reading list provides links to additional materials. These articles offer testing knowledge from other developers' perspectives, making it easier for the reader to come up with their own testing strategy.


Developers do not have time to read each and every piece of documentation out there. There is technically documentation for every library –– most of the time with hidden gems in it. Those documents dictate the way each and every library works –– this reference provides the MUST reads.

A list of additional important resources for testing nodejs applications.

Good to read


This section provides articles that may help build a better understanding –– while reducing the time spent on other library specific documentation. This trove of information also contributed to the writing of this book.


With a stress on simplicity, this book put an emphasis on modularization. Day-to-day developer challenges were broken down into smaller, manageable chunks. The “Divide and Conquer” strategy streamlined testing efforts –– and provided a deeper coverage that was not possible at first.

There is support available in books and online for use cases that were not discussed in this book. The intention was to save developers time, and I hope this book achieved just that.

Thank You.
