Simple Engineering

Simple Engineering ~ a Hoo.gy App Blog

Introduction

Testing NodeJS Applications is a compilation of patterns and hacks to test large scale NodeJS applications. Even though unit tests are the main focus, best practices to write, deploy and maintain quality code will be discussed as well.

For newbie and veteran alike, automating JavaScript tests tends to be rather intimidating. Practice makes perfect, just like in painting: constantly honing your craft with small daily improvements will yield tangible progress over time.

Testing NodeJS Applications explores ways to embed unit tests in your daily workflow, and to gauge code quality improvements with the help of reporting utility libraries. With a focus on Routes, it breaks down ways to test Authenticated Routes without falling into the integration test trap. In the same vein, Testing NodeJS Applications explores ways to approach Asynchronous code by mocking expensive constructs.

More advanced use cases explore mocking Models to avoid spinning up database instances. With techniques used with asynchronous code, to avoid reading from and writing to the file system and other streams, we will have a look at common patterns for testing Streams. On the third party libraries front, we will explore ways to test Services without hitting remote REST or WebSocket endpoints.

This article is undergoing heavy editing. More content is going to be added, removed or refined. If you have any questions or problems in your day to day work, and you need help on that front, tweet me the problem @murindwaz. If I am not able to help, I will find someone who can!

Synopsis

While running tests, ideally you should not even think about it. It is supposed to just work. But that is not always the case. Things just break, and most of the time, you may end up fixing testing code instead of fixing bugs, refactoring or adding functionality. The frustration that follows is one of the reasons programmers skip writing tests, or drop automated tests altogether. The main focus will hence be providing tips to spend less time fixing testing code, and more time adding features, crushing bugs and improving code quality.

Code rots. This blog post approaches testing techniques for a fairly large and old NodeJS application. It focuses on refactoring and modernization of larger chunks into smaller, more manageable components and modules, and has a collection of tips that are not necessarily well documented elsewhere. This write-up tries to address the aforementioned problems, at least at the height of the challenges I faced in my current old codebases.

Motivation

The following are a few motivations that led me to write yet another unit testing documentation.

  • As a developer, you always do the heavy lifting. But the best tools are hard to come by. When you can't find the right tools, you simply make them: that is the beauty of the open source spirit.
  • I find most nodejs/expressjs articles discussing Integration Testing. Even though that is a good thing, it is not the right choice to test most parts of my codebase.
  • Quite often, I find myself digging the internet for the same issues. Why not bundle all findings into one single document that I can return to for my daily tasks?
  • I also find most JavaScript testing resources opinionated, and they barely scratch the surface: good to get you started, but not enough for mature projects.
  • As you dive deeper into old code that needs to be modernized, or is poorly tested, you may realize that the available content does not match your needs quite well.
  • Few resources address, in the same document, the complexity that comes with large scale nodejs/express applications. In fact an express application(server) may also start Cronjobs, coupled with socket.io(or websocket) for realtime application features, or serve a Stream of content from various Databases or third party sources. This resource tries to bridge that gap.
  • It may therefore complement existing resources you already read while testing your own application, but it won't replace them in any way, nor promise you a miracle that right after reading it, somehow, your code will be green and bug free.

Disclaimer

In any case, to the best of my knowledge, I will state where the code, idea or question came from. Any failure to mention the source of code used throughout this book is accidental. That includes even when not asked by the author. Some samples, or examples, may originate from popular QA sites such as StackOverflow. There are excerpts taken from GitHub documentation or library examples. Every developer blog, or tech blog, that inspired the result will also be mentioned, and contributions will be made clear. Hackers' gists will also be referenced whenever applicable. Examples from my personal projects will not necessarily comply with this rule, for obvious reasons.

Content

  • Objectives — Who this is for, and why you may not need this mega tutorial
  • Setup — Making your testing environment suitable for work
  • Workflow — Task Automation Tools that help with productivity
  • Project Layout — Conventions around layout of NodeJS/Express Projects
  • Modularization — Breaking down big components into smaller manageable modules
  • Configuration — Configuration tuning depending on environment
  • Utility Libraries — Starting from a blank slate
  • Async – Callbacks — Strategies to Test and Mock Callback Functions
  • Async – Promises — Strategies to Test Promises
  • Async – Streams — Strategies to Test and Mock Streams
  • Routes — Testing REST endpoints without hitting the database (mocking authenticated routes)
  • Controllers — Testing Route Controllers in isolation
  • Middlewares — Testing Express Middleware in isolation
  • Models — Testing Mongoose Models without hitting the database
  • Services — The need for a Service Layer in NodeJS
  • WebSocket — Testing WebSocket without hitting remote endpoints
  • Background Jobs — Testing and Mocking long running background Jobs
  • Servers — Strategies to test NodeJS and Express servers
  • Addendum — More on maintaining large scale NodeJS applications
    • Versioning
    • Documentation
    • Memory Leaks
    • Infrastructure
    • Deployment
    • Zero Downtime
  • References
  • Reading List

Objectives

Building, testing, deploying and especially maintaining large scale legacy NodeJS applications is a daunting task. It takes discipline(unit tests, code reviews), structure and rock-solid processes to succeed in this endeavour. The main objective is to document ways to mitigate some challenges while writing testable code.

Why Testing

Manually testing all features on large projects is tedious, and sometimes not feasible. It is worth mentioning that you cannot guarantee the sanity of a piece of code, and for that reason of the whole system, unless it is thoroughly tested on every iteration.

Automated tests are a good way to remember how a bug has been resolved in the past, therefore preventing the same issues from happening in the future. When well designed, they serve as guard rails when a piece of code is altered or removed.

It is always good to remember that test coverage doesn't guarantee bug free code. Rather, it is a memory of how issues have been resolved, safeguarding against the same problem happening again.

Last but not least, tests give confidence while refactoring code. Indeed, test driven refactoring makes you refactor only when you are adding value.

What to test

Every piece of code written should be tested, in one way or another. A good way to start is to test new code, whenever an addition is required. It takes time to write, refactor, and maintain old test cases. Just like buying insurance, it costs more, but it is worth it.

For projects that lack good test coverage, paying off technical debt is a good starting point. For legacy projects, or projects that lack tests, it is better to start small, on the most used, unstable parts of the project. Chances are, you will be working on those parts trying to fix issues anyway.

Ideally, test before writing code, not the other way around, and take it slow.

  • How to test routes without doing integration testing.

How to test

Contradicting ideas around software testing are more around the how, than the why.

There is no one way to write good test cases. There are common traits shared by all test cases: Test Case > Feature > Expectations. What goes into the expectation defines the success, or lack thereof, of your program. Features, on the other hand, depend heavily on what the test case is about. When the Test Case is a Class, the Features are going to be the methods/functions of the Class. When the Test Case is a Method, the outcomes of various parameters play the feature role.
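
As a sketch, that hierarchy maps naturally onto nested describe/it blocks. The Calculator class here is purely illustrative, not from the original codebase:

var assert = require('chai').assert;
var Calculator = require('./calculator');//hypothetical class under test

describe('Calculator', function(){                        //Test Case: the Class
    describe('#add()', function(){                        //Feature: a method of the Class
        it('returns the sum of two numbers', function(){  //Expectation
            assert.equal(new Calculator().add(1, 2), 3);
        });
    });
});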

Since test cases are not cast in stone, it makes sense to refactor them. Refactoring is not re-writing, but better re-organization, better documentation, or grouping similar code blocks into fixtures or test utility libraries.

Talking about the how: the main program components are going to be tested in isolation. That is what Unit Testing is all about. By main program components, we mean Routes, Models, Controllers, Services and Servers.

To avoid the Integration Testing Trap, anything that reads or writes to an external medium will be stubbed. Stubbing is replacing the section that does the read, or the write, with a controlled function that mimics the read, or the write, behaviour. The expected data(response) will be mocked, meaning a pre-programmed data structure + data standing in for the response of the stubbed function.
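
As a minimal sketch, assuming a User model and a getUser function of our own, a stubbed database read could look like this:

var sinon = require('sinon');
//stub: replaces the function that does the actual read
var findById = sinon.stub(User, 'findById', function(id, next){
    //mock: a pre-programmed data structure + data, standing in for the real response
    return next(null, {_id: id, name: 'Jane Doe'});
});
//the code under test keeps working, without ever touching the database
getUser('abc123', function(error, user){ /* assertions go here */ });
findById.restore();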

There is a lot of discussion around Unit Testing. At the end of the day, it is better to automate repetitive tasks, manual testing in our case; that is what programming is all about.

Going down the rabbit hole ~ Is TDD dead? This question had some programmers debating the need, or lack thereof, of testing your code the TDD way. Kent Beck's Facebook Note makes you wonder which replacement could be suitable, if any. DHH pinned down what looks like an obituary, but doesn't rule out automated testing. You can also learn from Uncle Bob(Robert C. Martin) why TDD doesn't work.

Pro — Unit Tests

  • Boost pre-release confidence.
  • Give code refactoring confidence.
  • Prevent unexpected bugs.
  • Help (new) developers understand the code.

Cons — Unit Tests

  • Take time to write.
  • Increase the learning curve.

Setup

This section discusses tips to make your testing environment suitable for work. The choice of testing tools is not immune to the heated discussion around really anything tech in the development community. Tech people always find a way to go tribal. To avoid that, I will try to keep the message around consensus. We may not agree on which tools to choose, but we always agree that we need some kind of tools.

Challenge

Choosing your tools

At this point, we agree that testing is good, whatever testing school you subscribe to. And at this point, we have agreed to explore TDD or BDD as our school of testing, which accommodates people who believe in other forms of testing.

This section gives hints on things to consider while choosing your ideal testing framework. The wide variety of testing frameworks comes with a hefty price: choice paralysis. While choosing your testing framework, the following points will factor into your decision matrix:

  • Taste ~ no matter how good framework Y is, you may just enjoy using framework X. If you have no external constraints such as your boss, go ahead and use whatever you enjoy using. Zeal and love for your tools help you become a master of the craft.
  • Learning curve ~ If you want to switch to another framework, and time is scarce, check the frameworks with the shallower learning curve. If you have only a day, try to find something similar in structure and semantics to something you already know. There is nothing worse than being the first alien on a planet of Martians.
  • Stability ~ How stable the testing framework is plays a big role in the time you spend debugging your test code, versus doing actual work.
  • Integration ~ How easy it is to integrate into an existing testing setup.
  • Community ~ The size of the community using the framework plays a big role in getting the help you need. Things like documentation, solving framework bugs and sharing know-how all depend on the size and enthusiasm of the community around a framework.
  • Openness ~ Some open source software is iron-fist-led. Involvement in the evolution of the framework declines because of politics around the product development. You may well remember the reasons that led a team of engineers to fork iojs off the nodejs runtime. You want stability and pushing green builds, not bad politics.
  • Completeness ~ Some frameworks allow you to bring your own tools, others provide all-in-one solutions. Frameworks like Jest come with Spies, Mocking and reporting enabled. Others like Mocha provide a barebones framework, which makes it easy to plug in additional tools as you wish. You may pick whatever makes sense to you.

Tools

The choice of tools used in this documentation is for reference's sake. I do not suggest you stack them as described in this blog. Rather, take the thinking exposed here and apply it to the tools you already use, or are familiar with.

  • Test Runner — Mocha
  • Test Reporter — Istanbul
  • Task Runner — npm scripts and gulp
  • Assertion Libraries — In addition to native Assert, I chose Chai as it comes with Should and Expect baked into it.
  • Mocking Libraries — Sinon, Library specific Mocks(httpMock, mockgoose, etc…)
  • Spy Libraries — Sinon(stub), Library specific Spies(sinon-mongoose, etc…)

Going down the rabbit hole ~ The difference between mocha and _mocha. The above was resolved using the following issues on GitHub: Issue #262, Issue #496 and Issue #798. Source: unit test node code in 10 seconds; Source: Istanbul Cover; npm + Mocha —watch not accurately watching files.

Workflow

Introduction

Every time there is a code change, a chain of events happens before the code is certified as ready to go. These events range from refreshing a web page, if the modification affects look and feel; to performing actions such as posting a form, if the modification affects parts of the business logic; to recompiling assets (linting, minification) and pushing changes to a development server, when using a shared environment. Sometimes these tasks introduce manual repetition.

The workflow that is the subject of this chapter introduces ways to harmonize and orchestrate these steps, as well as to automate their execution. Task runners such as npm, grunt, gulp and a variety of build and transpiler tools play a very big role in this.

The problem is not a lack of tools, but rather decision paralysis. This chapter provides tools to get started; the tuning will be based on individual preference and project requirements.

Challenge

When running global npm packages, npm becomes a problem. Global installation may produce problems with automated deployments, since there is no extra indication to automatically tell npm that package A is local, whereas package B is global. To eliminate that ambiguity, making all modules local makes sense.

  • Sanity check/Integration tests for client facing endpoints.
  • Use plumber to log incidents while setting up tests
  • Instrumentation and test reporting (used gulp-coverage)
  • Be able to cover exceptions, missing data structures, etc.

Things you may take into account to customize your workflow:

  • Auto reload(hot reload) using: nodemon, supervisor or forever
  • Mocha test runner
  • Jasmine-node(shipped with Jasmine 2+)
  • [Supertest] a testing framework
  • [Nock] HTTP mocking framework

Supertest (written on top of Superagent) tests the endpoints of a REST API. Istanbul will be used to generate reports as the tests progress.

# In package.json at "test" - add next line
$ istanbul test mocha -- --color --reporter mocha-lcov-reporter specs
# Then run the tests using 
$ npm test --coverage 

Going down the rabbit hole ~ If you want to know more, there is a blog post, How to Solve the Global npm Module Dependency Problem, that provides more solutions to this problem. Localizing packages partially solves the global dependency explosion problem.

Running gulp locally

Running gulp on a remote server requires manually installing a global version of gulp. Different applications may require different gulp versions. Normal gulp installation:

npm install gulp -g # provides gulp to the cli/terminal
npm install gulp --save-dev # saves gulp as a local dev dependency

After a global gulp installation, the command becomes available system-wide. Installing packages system wide may not be ideal, especially when you have quite a number of them. The package configuration does not have a flag to tell which package should be installed globally. Its configuration suggests all packages are installed locally, local being a reference to the actual project. It is possible to run any package locally, by leveraging the .bin executables located under node_modules. The following configuration allows you to run a local version of gulp:

"scripts": {
  "gulp": "./node_modules/.bin/gulp" 
}

PS: Using ./node_modules/.bin/gulp forces the local version of gulp to run, instead of the global version.

How to use npm and gulp

  • $ npm run gulp will use the scripts > gulp version.
  • Conversely, adding ./node_modules/.bin/ to the local PATH makes local packages available system wide.
  • Choose a test runner: Mocha is my choice, but Jasmine-node (shipped with Jasmine 2+) can do it too. Jest looks like a good alternative to test node too.
  • Jasmine (with chai + sinon, or node assert) has assertions, in behaviour or TDD style.
  • [Supertest], an integration testing framework written on top of Superagent, tests the endpoints of a REST API.
  • [Nock], an HTTP mocking framework.
  • Istanbul, a reporting tool.
  • Click on [Tools for unit testing and quality assurance] for more.


Running Mocha tests without gulp

  • One of the most important steps is to get your tests running in watch mode, and to get proper reporting. This section is going to cover just that, plus a couple of tweaks that can save you a day or a week.
  • While searching for a task runner, stability, ease of use and reporting capabilities come first.
  • Mocha might be easy to get started with, but the drawback of choosing it is that it can feel over-engineered.
  • Istanbul coverage is added using local istanbul and local mocha, in the test section.
{
  "test": "mocha -R spec  test/**/*spec.js",
  "test:compile": "mocha -R spec --compilers js:babel/register test/**/*spec.js",
  "watch": "npm test -- --watch",
  "test:coverage": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/mocha -- --reporter spec  test/**/*spec.js"
}

The following produces no coverage information, and exits without writing coverage information.

{
  "test": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/mocha -- --reporter spec  test/**/*spec.js" 
}
  • When using istanbul cover mocha, you get the error: “No coverage information was collected, exit without writing coverage information”.
  • To avoid the above error, and keep reporting, use the istanbul cover _mocha version instead.

  • The command used to test the current iteration on my private projects is:

$ ./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/_mocha -- --reporter spec  test/**/*spec.js

Attaching Chai, Sinon and Expect to the global Object.

  • There are multiple ways to go about this, but the most compelling is using exports.
  • This approach won't make the libraries defaults, but will help reduce boilerplate while testing.
    //sinon ships separately; chai.sinon does not exist
    var chai = require('chai');
    var sinon = require('sinon');
    module.exports.chai = chai;
    module.exports.sinon = sinon;
    module.exports.expect = chai.expect;

Down the rabbit hole ~ You are not alone if you have been wondering how to add global variables used by all tests in JavaScript. Just remember, you may expect problems such as: Issue#86 about adding should on the global object, or Issue#891 How to make expect/should/assert be global in test files and be able to pass eslint.

Project Layout

Before diving into the mechanics of testing, let's look at the possible layout(main components) available in a typical NodeJS project; a sketch follows the list.

  • Configurations
  • Utility
  • Controller
  • Routes
  • Model
  • Service
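
A sketch of how those components may be laid out on disk (illustrative, not a prescription):

app-root/
  config/         # environment aware configurations
  lib/            # utility libraries
  controllers/    # route controllers
  routes/         # route definitions
  models/         # mongoose (or other) models
  services/       # third party gateways + business logic abstractions
  test/           # specs, mirroring the layout above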

Going down the rabbit hole ~ If you want to know more about structuring a NodeJS project, feel free to check this Example Project Structure

Modularization

divide et impera

Large code bases tend to be harder to maintain than smaller ones. Obviously, NodeJS applications are no exception. Updates to third party integrations and the evolution of the language or libraries are some of the reasons you will be reworking your codebase time after time.

The large aspect of a large scale application combines Lines of Code(20k+ LoC), number of features, third party integrations, and the number of people contributing to the project. Since these parameters are not mutually exclusive, a one person project can also be large scale: it has to have a fairly large number of lines of code, or a sizable amount of third party integrations.

Divide and conquer is an old Roman Empire technique to manage complexity. Dividing a big problem into smaller manageable ones allowed the Roman Army to conquer, maintain and administer a large chunk of the known world of its time.

Modularization is one of the techniques used to break down a large software system into smaller, more malleable, more manageable components. In this context, a module is treated as the smallest independent composable piece of software that does only one task. Testing such a unit in isolation becomes relatively easy. Since it is a composable unit, integrating it into another system becomes a breeze.

Modularization is achieved by leveraging the power of module.exports(export in ES6+). Modules come as functions, objects, classes, configuration metadata, initialization data, servers, etc.
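
A quick sketch of some of those flavours (file names are illustrative):

//util/format-date.js ~ module as a single function
module.exports = function formatDate(date){
    return date.toISOString();
};

//util/json.js ~ module as an object of functions
module.exports = { parse: JSON.parse, stringify: JSON.stringify };

//config/defaults.js ~ module as configuration metadata
module.exports = { port: 3000, env: 'development' };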

If you want to dig deeper, feel free to read Export This: Interface Design Patterns for Node.js Modules by Alon Salant, CEO of Good Eggs, and Node.js module patterns using simple examples by Darren DeRider, aka @73rhodes.

Configuration

Modularize nodejs application configurations

The 12 Factor App suggests managing configuration as code. That makes it fast to deploy an application anywhere, with fewer file structure changes. In most of my applications, configurations were stored on a machine file server, at /etc/config/[app-name]/config.ext. This works, but I realized that it may be a problem when setting up a new dev machine. It is better to move this configuration inside the code, ideally at the root: [app-root]/.conf, etc.
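
A minimal sketch of configuration as code, assuming a hypothetical [app-root]/config/index.js that selects values per environment:

//config/index.js ~ a minimal sketch
var defaults = { port: 3000 };
var byEnvironment = {
    development: { db: 'mongodb://localhost:27017/app-dev' },
    staging: { db: process.env.DB_URL },
    production: { db: process.env.DB_URL }
};
var env = process.env.NODE_ENV || 'development';
module.exports = Object.assign({}, defaults, byEnvironment[env], { env: env });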

Going down the rabbit hole ~ Other people who worked on the same problem: How to store Node.js deployment settings/configuration files?, Managing config variables inside a Node.js application, Configuring Node.js Web Applications… Manually || Convict.js

Managing Configuration files

This section gives tips on how to manage configuration files. It also gives an example of how to test the existence of configuration keys.

The ideal case is where every programmer can deploy the latest code to a convenient environment, most of the time: staging. In some ways, democratizing deployments also gives developers access to some sensitive data, authentication data for instance.

What about storing production keys? In most cases, different teams share directories via revision systems. How can we manage configuration data as a part of the program, giving developers the ability to work with the code, but limiting access to production-ready configuration keys?
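
As for testing the existence of configuration keys, a sketch against the hypothetical config module above could be:

var expect = require('chai').expect;
var config = require('../config');

describe('config', function(){
    it('exposes the keys the application relies on', function(){
        expect(config).to.have.property('env');
        expect(config).to.have.property('port');
        expect(config).to.have.property('db');
    });
});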

Configure Socket.io + Express app

server{
  # more configurations go in this place ...
  location /{
      # 3 lines to serve websockets
      # More configurations go in this place ...
      proxy_http_version 1.1;                   # Line Number 1
      proxy_set_header Upgrade $http_upgrade;   # Line Number 2
      proxy_set_header Connection "upgrade";    # Line Number 3
  }
}
  • Line 1 tells nginx to use HTTP version 1.1
  • Line 2 tells nginx to relay the client's Upgrade header
  • Line 3 tells nginx to upgrade the connection upon receiving a WebSocket handshake request

Going down the rabbit hole ~ Source: Chris Lea's – Proxying WebSockets with Nginx

Utility Libraries

Introduction

The utility libraries are a good place to start testing, if these two conditions are met:

  • You are tasked with a new large scale legacy project that counts virtually zero Unit Test cases. The code rotted to the point that you are afraid to add even a comma to the first file you open.

  • You have no requirements(features, bugs) that require your immediate attention, but you are expecting new requirements to land on your desk in a week or two.

The reason utility libraries seem a good starting point is detailed in the conclusion below.

Code

//util/index.js
//Utility to format User name.
module.exports.formatUser = function(data){
  return Object.assign({}, {
      first: data.first, 
      last: data.last, 
      full: [data.first, data.last].join(' ')
  });  
};

Test

var assert = require('chai').assert;
var formatUser = require('./util').formatUser; //the module above

describe('util#formatUser()', function(){
    it('returns first, last and full name', function(){
        var user = formatUser({first: 'Jane', last: 'Doe'});
        assert.equal(user.full, 'Jane Doe');
    });
});

Conclusion

The reason utility libraries seem a good place to start lies in the fact that even the worst projects have them; they are isolated from the rest of the code in most cases, and they are relatively easy to read.

Async — Callbacks

This section covers strategies to test and mock callback functions in isolation. We start by loading the dependencies required to test callbacks. The Sinon library provides Spies, Stubbing and Mocking tools. The Chai library provides an assertion library. It is possible to rely on the NodeJS native assertion library; but if you need more tools such as assert.isAtMost or assert.deepEqual, then you can add the Chai dependency to your test toolkit; otherwise, you don't really need it.

//in any.spec.js
var fs = require('fs');
//testing utilities
var sinon = require('sinon');
var assert = require('chai').assert;

Like in other tests, the structure of the test looks as follows:

//in any.spec.js
describe('fs', function () {
    afterEach(function(){/***/});
    beforeEach(function(){/***/}); 
    //other describe and it constructs
});

As an example, the idea is to replace fs.unlink with a stub, coupled with a spy, so that we can check that a file that should be deleted indeed has been deleted. This test makes sense in that you don't want actual files to be deleted from the file system while testing. Not only because hard drive I/O costs more, but also because you don't want to delete some files by accident.

afterEach(function(){
   fs.unlink.restore(); //this.unlink points to the same stub
});
beforeEach(function(){
   this.unlink = sinon.stub(fs, 'unlink', function(filepath, next){
       if(typeof next === 'function') return next(null);//keeps the async flow going
   });
});

The function that deletes a file has to take a callback. callFunctionThatDeletesFiles describes such a function. To make sure the test executes to the end, the done callback is added to the test. Sometimes these kinds of tests end with timeout errors; then you have to debug and understand why callFunctionThatDeletesFiles is not able to execute the passed-in callback.

// Somewhere in your code. 
describe('unlink()', function(){
    it('removes a file', function (done) {
        callFunctionThatDeletesFiles(function next(){
        	assert(fs.unlink.called, "unlink() has been called");
            done();
        });
    });
});

Callback hell and how to tame the dragon

  • next()
  • Move most operations from middlewares to Promises (see the sketch below)
  • Reduce reads/writes from middlewares
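
A sketch of moving a callback-style operation behind a Promise; readFileAsync is our own wrapper here, not an fs API:

var fs = require('fs');
function readFileAsync(path){
    return new Promise(function(resolve, reject){
        fs.readFile(path, 'utf8', function(error, content){
            return error ? reject(error) : resolve(content);
        });
    });
}
//a flat chain replaces nested callbacks
readFileAsync('./package.json')
    .then(JSON.parse)
    .then(function(data){ /* use data */ })
    .catch(function(error){ /* handle errors once, at the end */ });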

Async — Promises

Introduction

This section covers strategies to test and mock Promise constructs. The idea discussed in this section is to replace the function that makes the external request with a Stub. The stub has to return either a Promise of a mocked response, or simply a Mock of the Resolved Response.

Code

Let's consider a simple form of a Promise construct. It uses the Fetch API, but variations can use any Promise-returning HTTP client.

//Lab Pet fetches data from a url
window.fetch('/rest/api/endpoint/url').
then(function(response){ 
    new Service().doSomethingWith(response); 
    return response; 
}).
catch(function(error){ 
    new ErrorHandler().doSomethingWith(error);
    return error;
}); 	

Test

The “function that makes the external request” is fetch. Replacing fetch with a stub allows the next async functions, defined in the .then and .catch constructs, to continue with execution. There are various ways to deal with such a situation, depending on how deep the tests have to go. Some of those techniques are examined in the following test examples. Before that, let's examine the structure of the test cases.

The first section of the test case involves the dependencies needed to make this test a success.

var sinon = require('sinon');
var bakedPromise = require('./fixtures/baked-promise'); 
var mockedResponse = require('./fixtures/mocks/mocked-response');

The second section of this test case shows how the test case is organized.

//in any.promise.spec.js
describe('GET /url', function () {
    afterEach(function(){/***/});
    beforeEach(function(){/***/}); 
    //other describe and it constructs
});

In all cases, you will need to restore the stubbed fetch function. I always like to start with the after/afterEach block, so that I don't somehow forget to add it.

afterEach(function(){ this.fetchStub.restore(); });

One way to approach mocking a response is to return a plain simple Promise. The other, similar, way is to replace fetch with a stub that returns a Promise. The last is to rely on the Promise baked into the stubbing utility. Those three ways are expressed in the following beforeEach snippet (in practice, keep only one):

beforeEach(function(){ 
    //one way: return a baked promise
    this.fetchStub = sinon.stub(window, 'fetch').returns(bakedPromise(mockedResponse));
    //other way: stub fetch with a function that returns a baked promise
    this.fetchStub = sinon.stub(window, 'fetch', function(options){ 
        return bakedPromise(mockedResponse);
    });
    //yet other way: using stubbing utility that resolves to a promise
    this.fetchStub = sinon.stub(window, 'fetch').resolves(mockedResponse);
});

You may have noticed the above stubs expect cases where the function is supposed to succeed. But what can you do when you are tasked with checking whether the right error handling is executed? That is where failure test cases come in. You can always group failing test cases in one suite, or re-initialize stubs case by case. The following lines display some ways you can do it:

beforeEach(function(){
   //one way
    this.fetchStub = sinon.stub(window, 'fetch', function(options){ 
        return bakedFailurePromise(mockedResponse);
    });
    //another way: using 'sinon-stub-promise's returnsPromise()
    //PS: You should install => npm install sinon-stub-promise
    this.fetchStub = sinon.stub(window, 'fetch').returnsPromise().rejects(reasonMessage); 
    //same way: without sinon-stub-promise is possible for sinon version >= 2.0.0
    this.fetchStub = sinon.stub(window, 'fetch').rejects(reasonMessage); 
});

Finally, the actual testing may look something like one of the following:

it('works', function(done){
    //use the default function like nothing happened
    window.fetch('/url').then(function(response){
        assert(window.fetch.called, 'fetch() has been called');//this.fetchStub points to the same stub
        done();
    }).catch(done);
});
//source: http://jonnyreeves.co.uk/2012/stubbing-JavaScript-promises-with-sinonjs/
//source: https://templecoding.com/blog/2016/02/29/how-to-stub-promises-using-sinonjs/
  • bakedPromise() is any function that takes a Mocked(baked) Response and returns a promise
  • This approach doesn't tell you if Service().doSomethingWith() has been executed.

Going down the rabbit hole ~ Stubbing JavaScript Promises with SinonJS ~ on Jonny Reeves' blog

Async — Streams

This section is about testing read, write and duplex streams.

Primer

  • A Readable Stream can be as easy as fs.createReadStream(filepath)
  • A Writable Stream can be as easy as the response in an express function(req, res, next){}
  • Piping two streams channels data from one stream(readable) to another(writable): readable -> writable
  • Two-way streams(Readable and Writable) are most of the time designed to make transformations, hence transformers are duplex streams.
  • Piping then becomes readable -> transformer -> transformer -> transformer -> writable

  • Transformer Stream class looks a bit like:

const inherits = require('util').inherits;
const Transform = require('stream').Transform;
function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});
    //<= re-enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation
    //@todo process chunk + by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || Math.random() });
    if(typeof next === 'function') next();
};
MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals that the operation is over
    if(typeof next === 'function') {next();}
};
  • Testing the above function in isolation:
it('_transform() - works', function(done){
    var Readable = require('stream').Readable;
    var rstream = new Readable({objectMode: true});
    rstream._read = function(){};//no-op: data is pushed manually below
    var mockPush = sinon.stub(MetadataStreamTransformer.prototype, 'push', function(data){
        if(data !== null){ assert.isNumber(data.id); }//testing data sent to callers, etc.
        return true;
    });
    var tstream = new MetadataStreamTransformer();
    rstream.push(JSON.stringify({id: 1}));
    rstream.push(JSON.stringify({id: 2}));
    rstream.push(null);//ends the readable stream
    rstream.pipe(tstream);
    tstream.on('finish', function(){
        assert(mockPush.called, 'push() has been called');
        mockPush.restore();
        done();
    });
});

Going down the rabbit hole ~ Check glob to know more about using Glob Stream to initialize all files coming in as a stream, How to TDD Streams, Testing with vinyl for writing to files

Stubbing

  • How does Stubbing differ from Mocking?
  • How does Stubbing differ from Spying? Spies/Stubs are functions with pre-programmed behavior.
  • How do you know if a function has been called with a specific argument?
    • For example: I want to know that res.status(401).send() was called (see the sketch below).
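
A minimal sketch for that last case: a stub that returns the response object itself keeps the status().send() chain intact:

var sinon = require('sinon');
var assert = require('chai').assert;

var res = {};
res.status = sinon.stub().returns(res); //keeps res.status(...).send(...) chainable
res.send = sinon.spy();
//the code under test would call: res.status(401).send('Unauthorized');
res.status(401).send('Unauthorized');
assert(res.status.calledWith(401), 'status() received 401');
assert(res.send.called, 'send() has been called');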

Test — How to Stub Stream Functions and Mock Stream Response Objects

  • The general structure of a stream processing program(server or client):
    var fs = require('fs');
    var gzip = require('zlib').createGzip();//quick example to show multiple pipings
    var route = require('express').Router();
    //E.g: express
    //getter() reads a large file of songs metadata, transforms it, and sends back scaled down metadata
    route.get('/songs', function getter(req, res, next){
            let rstream = fs.createReadStream('./several-TB-of-songs.json');
            rstream.
                pipe(new MetadataStreamTransformer()).
                pipe(gzip).
                pipe(res);
            //handling errors in the pipes => next hands the error to the next handler
            rstream.on('error', (error) => next(error, null));
    });
  • How to test the above code: in small pieces.
  • gzip and res won't be tested, but stubbed, returning writable+readable streams
  • MetadataStreamTransformer will be tested in isolation
  • MetadataStreamTransformer._transform() will be treated as any other function, except that it accepts a stream
  • new MetadataStreamTransformer() won't be tested, but stubbed, returning a writable+readable stream
  • fs.createReadStream won't be tested, but stubbed, returning a mocked readable stream
  • .pipe will be stubbed, returning a chainable stream.
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next() and check that it has been called on a write error.
  • Mocking fs.createReadStream to return a readable stream:
    //the stub can emit two or more data events + close the stream
    var Readable = require('stream').Readable;
    var rstream = new Readable();
    rstream._read = function(){};//no-op
    sinon.stub(fs, 'createReadStream', function(file){
        //trick from @link https://stackoverflow.com/a/33154121/132610
        assert(file, 'createReadStream() received a file');
        rstream.emit('data', "{\"id\":1}");
        rstream.emit('data', "{\"id\":2}");
        rstream.emit('end');
        return rstream;
    });

    var pipeStub = sinon.spy(rstream, 'pipe');
    //Once called, the above structure will stream two elements: good enough to simulate reading a file.
    //to stub the gzip library: another transformer stream
    var next = sinon.stub();
    //use this function | or call the whole route
    getter(req, res, next);
    //expectations follow:
    assert(pipeStub.called, 'pipe() has been called');
  • What is the difference between readable vs writable vs duplex streams? See Substack's Stream Handbook.
  • Readable produces data that can be fed into a Writable stream => has readable|data events + extends by implementing ._read
  • Writable can be .pipe'd to, but not from(e.g: res in the above example) => has writable|data events + extends by implementing ._write
  • Duplex goes both ways: a Transformer stream is duplex. It has both events + extends by implementing ._transform

Going down the rabbit hole ~ More on readable streams(Stream2), QA: Mock Streams, Mock System APIs, Streaming to Mongo available for sharded clusters

Test — How Stubbing HTTP requests works

  • When to use this:
    • Testing all routes
    • Making assertions about the nature of the response returned(utilities included)
    • The server is internally provided, and booted on demand: so there is no need to start the base server.
  • When not to use this:
    • While running integration testing with a need to hit the database.
  • Using a Mocking library such as node-mocks-http makes sure requests/responses are pre-programmed, with the ability to test whether expected functions/logic have been executed along the way
  • Since the Mocked Object created by such a library is a stream, you can also use it in a piped streams context:
// Add promise support if this does not exist natively.
if (!global.Promise) {
    global.Promise = require('q').Promise;//or any other promise library
}
var chai = require('chai');
var chaiHttp = require('chai-http');
chai.use(chaiHttp); //registering the plugin.
var app = require('express')();
require('./lib/routes')(app);//attaching all routes to be tested
//use this line to retain cookies instead
var agent = chai.request.agent(app);
//agent.post()|agent.get()|agent.del()|agent.put()
//initialization of app can be express or another HTTP compatible server.
it('works', function(done){
    chai.request(app)
        .put('/user/me') //.post|get|delete
        .send({ password: '123', confirm: '123' })
        .end(function (err, res) {
        expect(err).to.be.null;
        expect(res).to.have.status(200);
        //more possible assertions
        expect(res).to.have.header('x-api-key');
        expect(res).to.have.headers;//Assert that a Response or Request object has headers.
        expect(res).to.be.json; // .html|.text
        expect(res).to.redirect; // .to.not.redirect
        //the following assert on a request object, when one is available:
        //expect(req).to.have.param('orderby');//test sent parameters
        //expect(req).to.have.param('orderby', 'date');//test sent parameter values
        //expect(req).to.have.cookie('session_id');//test cookie parameters
        done();
    });
});
//keeping port open 
var requester = chai.request(app).keepOpen();
it('works - parallel requests', function(){
    Promise.all([requester.get('/a'), requester.get('/b')])
    .then(responses => { /**do - more assertions here */})
    .then(() => requester.close());
});

Going down the rabbit hole ~ Stubbing HTTP Requests, Mocking Express Request/Response, HTTP Response assertions for the Chai Assertion Library

Models

Introduction

This section is about testing models. By testing, I mean unit testing models in isolation, without hitting the database. Testing models while hitting the database is known as Integration testing. Such tests can either be done to test data integrity scenarios, or via RESTful API integration testing. That will not be covered here.

Since our premise is not to hit the database, the database server will not be needed. That alone dramatically speeds up a test run from beginning to end. To do that, we will Stub the Mongoose functions supposed to hit the database, and Mock the database response(data). In addition, some functions are going to be Spied upon, in cases where we need to assert their execution.

Tools

Requests are mocked using [Nock]. Sinon Stubs simulate a response from the Mongo::UserSchema::save() function.

Spy on a Model when a certain function gets called(e.g: save), and use a stubbed function. While stubbing a function, we can specify that the stub call the original callback.

Mock-all tools like Mockery come with a challenge. When a test fails due to an unhandled exception or rejection, the after hook may not be able to de-register and reset to the default functions, which may cause program disruption in some cases.

If the mocked-out function changed the behaviour of the file system, for example, failing to reset the function to its initial state may break the whole system, resulting in rebooting either the test cases or the whole system, depending on the extent of the damage.

Going down the rabbit hole ~ Mocking database calls by wrapping Mongoose with Mockgoose, StackOverflow response that works for stubbing, Getting started with NodeJS and Mocha, SinonJS – a Mocking framework, Mocking Model Level

Test — Mocking Database access functions

Functions that access or change database state can be replaced by calls to functions that are spied upon, and which call custom functions that supply|emulate similar results.

There are a couple of solutions that can be used; one of them is sinon:

//Model should be an actual model, e.g: User|Address, etc.
ModelSaveStub = sinon.stub(Model.prototype, 'save', cb);
ModelFindStub = sinon.stub(ContactModel, 'find', cb);
ModelFindByIdStub = sinon.stub(ContactModel, 'findById', cb);

//cb will be the callback that simulates the real-life function
function cb(fn, params){
 return fn.apply(this, arguments);
 //check whether params is the one holding the callback instead, and apply it.
}

The [Nock] library is used for mocking requests. The Sinon library provides spies and stubs. The stubbed function will use fixtures as the expected outcome of a Mongo::UserSchema::save() function call.

Rule of thumb

  1. Spy on a Model when a function is called(e.g: save).
  2. Use a stubbed function to simulate the original function. It is possible to call the original callback in a stubbed function.

The strategy is to stub the function that calls the database, and always make sure the async function, if any, continues the flow of the program. In case there is a value, object or function resulting from the stubbed function, a mocked value replaces the expected function call outcome.

Going down the rabbit hole ~Mocking database calls by wrapping Mongoose with Mockgoose, StackOverflow response that works for stabbing, Getting started with NodeJS and Mocha, SinonJS – a Mocking framework, Mocking/Stubbing/Spying mongoose models, A TDD Approach to Building a Todo API Using Node.js and MongoDB

Test — Chained Model Functions

It is not so obvious to test such a code block:

Order.find().populate().sort().exec(function(err, order){ /** ... */});

Keyvan Fatehi managed to hack something amazing:

//Slight modification of the original code, with the .sort() link added to match the chain above
var findStub = sinon.stub(Order, 'find').returns({
    populate: sinon.stub().returns({
        sort: sinon.stub().returns({
            exec: sinon.stub().yields(null, {
                id: "1234553"
            })
        })
    })
});

Test — Chained Model Function with Promises

What can happen if a promise is involved?

Order.find().populate().sort().exec().then(function(order){/** ... */});

There is a library that solves that problem, which can be added on top of Sinon. If Sinon is not part of your testing framework, this cannot be a viable alternative.

The library's name is sinon-mongoose, and it may require sinon-as-promised to resolve promises.

The code above can be tested using mocks:

require('sinon');
require('sinon-as-promised');
require('sinon-mongoose');
//code borrowed from the library:  
sinon.mock(Order)
  .expects('find')
  .chain('populate').withArgs('props_1 props_2')
  .chain('limit').withArgs(10)
  .chain('sort').withArgs('-date')
  .chain('exec')
  .resolves('SOME_VALUE');//Or rejects
//MongooseModel : Order

Conclusion

Testing model functions without spinning up the database is feasible. It makes unit test scenarios run faster. But it comes with a cost: there are a lot of mocks.

Services

Introduction

The service layer comes in two major flavours: as a gateway to third party service integrations, or as an abstraction layer over the application's business logic.

When you integrate with a payment processor, Stripe for example, the number of instances and function calls within your application translates into the difficulty you may face when Stripe goes out of business, or changes its function signatures.

The same applies when a model is used multiple times, with almost the same signature: when the naming changes from one version to another, the difficulty of renaming and retesting every usage instance increases as well.

To mitigate these repetitions, a service layer proves to address these kinds of issues pretty well. The service layer makes it possible to use libraries we don't control the same way as libraries we control. Changing the signature of a function in a library that we don't control only affects one instance in a library we control: the wrapper function implemented in our service.
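
A sketch of that wrapper idea; the payment service module and its function names are ours, only the Stripe-specific call lives inside:

//services/payment.js ~ a minimal sketch of the wrapper idea
var stripe = require('stripe')(process.env.STRIPE_KEY);
//exposed to make stubbing easier in tests
module.exports.stripe = stripe;
//the third party signature is referenced only here:
//if it ever changes, only this function changes
module.exports.charge = function(amount, token, next){
    return stripe.charges.create({
        amount: amount,
        currency: 'usd',
        source: token
    }, next);
};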

Challenge

  • Mock Payment Gateway
  • Mock Database Read/Write operations
  • Mock Third Party Systems

Mocking and Testing Stripe

Stubbing stripe with sinon ~ using stub.yields
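
A sketch of that technique against the hypothetical wrapper above: stub.yields hands pre-programmed arguments to the callback, so nothing hits Stripe:

var sinon = require('sinon');
var assert = require('chai').assert;
var payment = require('./services/payment');//the sketch above

it('charges a card without hitting Stripe', function(done){
    //yields(err, result) calls the callback passed to charges.create
    var create = sinon.stub(payment.stripe.charges, 'create')
        .yields(null, {id: 'ch_123', paid: true});//mocked Stripe response
    payment.charge(1000, 'tok_test', function(error, charge){
        assert(charge.paid, 'the mocked charge is marked paid');
        create.restore();
        done();
    });
});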

Mocking and Testing Redis Pub/Sub
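
A sketch with a stand-in client shows the shape of such a test; the client object and channel name are illustrative, not a redis API:

var sinon = require('sinon');
var assert = require('chai').assert;
//a stand-in for a real redis client, keeping the test off the network
var redisClient = { publish: function(channel, message){ /* would hit redis */ } };

it('publishes a system event', function(){
    var publish = sinon.stub(redisClient, 'publish').returns(true);
    //the code under test would publish on some channel
    redisClient.publish('SYSTEM_EVENT', JSON.stringify({id: 1}));
    assert(publish.calledWith('SYSTEM_EVENT'), 'publish() received the expected channel');
    publish.restore();
});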

Mocking and Testing Mailgun

Testing Mailgun .send() with Mocha and Sinon
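
In the same spirit, a sketch assuming the mailgun-js style messages().send(data, callback) interface:

var sinon = require('sinon');
var assert = require('chai').assert;
//a stand-in for mailgun.messages(), keeping the test off the network
var messages = { send: function(data, next){ /* would hit Mailgun */ } };

it('sends mail without hitting Mailgun', function(done){
    var send = sinon.stub(messages, 'send').yields(null, {message: 'Queued. Thank you.'});
    messages.send({to: 'user@example.com', subject: 'hello'}, function(error, body){
        assert.equal(body.message, 'Queued. Thank you.');
        send.restore();
        done();
    });
});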

Conclusion

This section focused on testing Services in isolation, with a focus on stubbing expensive functions and simulating their results with our mocked data. It also introduced services as a way to decouple business logic scattered across Routes/Controllers/Models into one place where it can be tested in isolation.

Middlewares

Introduction

Two well known and widely used Express middlewares are authenticate and cors. The reason behind this choice is not based on particular instances but rather that, in one way or another, most NodeJS applications have to implement those two middlewares.

This section focuses on how to mock Request and Response Objects while testing ExpressJS middlewares.

  • Spying on whether certain calls have been made
  • Making sure the requests don't leave the local machine.

Code

var getUsers = require('./controller').getUsers;			

Test

var sinon = require('sinon');
var chai = require('chai');
var expect = chai.expect;

describe("Routes", function() {
    describe("GET Users", function() {
        it("should respond", function() {
            var req,res,spy;
            req = res = {};
            spy = res.send = sinon.spy();
            //to return a value => res.send.
            getUsers(req, res);
            expect(spy.calledOnce).to.equal(true);
            //note: an anonymous sinon.spy() has no restore(); only a spy wrapping an existing method does.
        });     
    });
});
//source: http://www.designsuperbuild.com/blog/unit_testing_controllers_in_express/
  • Particular Case: How to mock a response that will be used with a Streaming Source.

Controller

Introduction

This section focuses on how to test Controllers in a NodeJS/Express application. The definition of a controller, and how controllers fall into the bigger picture, is the subject of the Project Layout section.

  • It is hard to test a not-so-well-organized Controller.
  • It is OK to hit a deadlock; that means you are making progress.
  • If the Controller is not testable, moving some parts into their own libraries makes sense.

    • OR: Wrapping the Initial Function, OR using CallThrough
    • This Callback replaces any findById callback.
    • Which means we will not be able to execute computations inside the callback(i.e. Messenger.send() etc.)
    • Somehow we need to wrap the previous callback inside this new callback

Challenge

  • Test the code, and not the response

  • Avoid falling into integration testing while Unit Testing, and vice-versa.

  • How to test express controllers. This article covers Mocking Responses

    • Testing something like:
new User(options).save(function(err, user){
  doMoreThings(user);
  return next(user);
}); //<- The callback will be completely replaced by the Stub.
  • To make this easier to test:
new User(options).save().then(function(user){
  doMoreThings(user);
  return next(user); //[If it is a promise itself]
});
  • The best way though, is to move every independent concern into a small testable object.
module.exports = function(req, res, next){
  User.findById(req.user, function(error, user){
    if(error) return next(error);
    new Messenger(options).send().then(function(response){
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job
      return res.status(200).json({message: 'Some Message'});
    });
  });
};
//can be easily turned into: ---- the problem is the Object returned by UserService.findById()
module.exports = function(req, res, next){
  UserService.findById(req.user).
    then(function(user){ return new Messenger(options).send(); }).
    then(function(response){ return new RedisService(redisClient).publish(Messenger.SYSTEM_EVENT, payload); }).
    then(function(response){ return res.status(200).json({message: 'Some Message'}); }).
    catch(function(error){ return next(error); });
};
//To combine responses, the above can be merged one after another.

Combining intermediate resolved responses

  • There are a couple of articles that discuss merging responses from 2 or more resolved promises.

Going down the rabbit hole ~ Passing data between Promise callbacks, Combine data of two async requests to answer both requests, Bluebird has a .join() function ~ works better than Promise.all()

Mocking Request Responses

Going down the rabbit hole ~ For more on mocking requests, this article can be a good starting point

  • Mocking Request Promise – with Mockery

    Route Testing: How to Mock Responses without hitting the server.

  • The only thing that is mocked here is the JSON response.

  • To avoid hitting databases, the Controller Action can be spied upon or stubbed, and nock will respond with a mocked response.

  • Nock is good if you are doing one of the following:

    • Hitting a third party REST/SOAP API: Payment, Sending Emails, Tax, Shipping APIs
    • Updating a third party API from version v1.x.x to version vN.x.x, or downgrading
    • Integrating with OAuth, when you are testing the behaviour of your application based on some results.
    • Expecting a WebHook from another system to hit your Endpoint
  • Nock may not be suitable for one of the following:

    • Testing your own endpoints, since that is integration testing
    • When testing your own endpoints, it is better to Mock Models(see below)

const expect = require('chai').expect;
const nock = require('nock');
// controller action method
const getUser = require('../index').getUser;
// mocked response => module.exports = { data: {} }
const response = require('./response');

describe('Get User tests', () => {
  afterEach(() => { /** restore + cleanups */ });
  beforeEach(() => {
    nock('https://api.github.com')
      .get('/users/octocat')
      .reply(200, response);
  });
  it('Get a user by username', () => {
    return getUser('octocat')
      .then(response => {
        //expect an object back
        expect(typeof response).to.equal('object');
        //Test result of name, company and location for the response
        expect(response.name).to.equal('The Octocat');
        expect(response.company).to.equal('GitHub');
        expect(response.location).to.equal('San Francisco');
      });
  });
});

Going down the rabbit hole ~ with the following articles: Nock, a primer on David Walsh's blog; Using Nock ~ this approach works better than the way I test WebHooks with pre-programmed responses; Unit Testing Express/Mongoose App routes without hitting the database

How to Stub Mongoose Function and Mock Document Objects

  • Unless decided ahead of time, hitting the database slows down Unit Tests.
  • Writing all of these changes to the database is not advisable.
  • An alternative is to Mock Mongoose/MongoDB connections.
  • The way I do it: Using sinon-mongoose

Steps to Stub Mongoose with sinon-mongoose

  • Replace Default promise with Promise A+
  • Replace Mongoose with Sinon-Mongoose
// Using sinon-as-promised with custom promise 
var sinon = require('sinon');
var Promise = require('promise');
require('sinon-as-promised')(Promise);
// Adding sinon-mongoose to mongoose 
var mongoose = require('mongoose');
require('sinon-mongoose');

Without mock library:

var mongoose = require('mongoose');
describe('UserModel', function(){
  before(function(){
    mongoose.connect(process.env.CONNECTION_URL);
  });
  after(function(){
    mongoose.connection.close(); 
    mongoose.disconnect(); 
  });
});

With Mock Library: — without promises i.e with Callbacks

  • Replacing the default Mongoose Promise library:

var mongoose = require('mongoose');
mongoose.Promise = require('bluebird');
//to replace the underlying mongodb driver, do instead:
var uri = 'mongodb://localhost:27017/mongoose_test';
// Use bluebird
var options = { promiseLibrary: require('bluebird') };
var db = mongoose.createConnection(uri, options);

  • Common library loading:

require('sinon');
require('mongoose');
require('sinon-mongoose');
  • Example of a model definition:
    //in model/user.js
    var UserSchema = new mongoose.Schema({name: String});
    UserSchema.statics.findByName = function(name, next){
        //`this` gives access to the Compiled Model
        return this.where({'name': name}).exec(next);
    };
    UserSchema.methods.addEmail = function(email, next){
        //works with the un-compiled model
        return this.model('User').find({ type: this.type }, next);
    };
    //exporting the model
    module.exports = mongoose.model('User', UserSchema);
  • Testing
//in model/user.js
var UserSchema = new mongoose.Schema({name: String});
mongoose.model('User', UserSchema);   

Subsequent behaviours, such as save() and find(), are tested after the before() hook has run.

// test.spec.js
describe('UserModel', function(){
    after(function(){ this.UserMock.restore(); });
    before(function(){
        //the model is declared in model/user.js
        this.User = mongoose.model('User');
        this.UserMock = sinon.mock(this.User);
    });
});

Why this fails:
  • Mock works on Objects, i.e. models.
  • save() is defined on the Document, not on the model object itself.
  • This explains why we stub the prototype: sinon.stub(UserModel.prototype, 'save', cb)
  • Without a mock, it becomes impossible to chain any extra function such as .exec() or .stream()
  • So a double Stub is required in such cases: sinon.stub(UserModel.prototype, 'save', cb).returns({exec: sinon.stub().yields(null, results)});
  • Alternatively, use .create() instead. The same approach works here, but requires a lot of changes to an existing codebase (https://github.com/underscopeio/sinon-mongoose/issues/10#issuecomment-269478458)
  • Or use [Factory girl]() like in this answer

describe('save()', function(){
    it('works', function(){
        var self = this;
        var user = {name: 'Max Zuckerberg'};
        var results = Object.assign({}, user, {_id: '11122233aabb'});
        //yields works for callbacks
        //.chain('sort').withArgs('-date')
        this.UserMock.expects('save').withArgs(user).yields(null, results);
        sinon.stub(this.User.prototype, 'save', cb);//<- should be done in a mock fixture; cb simulates the original callback
        new this.User(user).save(function(err, user){
            //add all assertions here.
            self.UserMock.verify();//verifying
            self.UserMock.restore();//restoring
        });
    });
});
describe('find()', function(){
    //.chain adds possibility to test various chainings in a find query. 
    //this will be frequent in apps that fetch more than they write
});

PS: Models should be created once, across all tests.

  • The error OverwriteModelError: Cannot overwrite `Activity` model once compiled. means one of the following occurred:
  • you got the caps wrong while importing a model => import User from 'model/user
  • you got the model definition wrong => var userSchema = new Schema({}); module.exports = mongoose.model('user', userSchema) <=== a new schema, and not just schema(this was my case)
  • you compiled the model twice => module.exports = mongoose.models.User || mongoose.model('user', userSchema);

  • QA: StackOverflow
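
A minimal sketch of a model module guarded against that last cause, reusing the already compiled model when it exists:

//model/user.js - compile once, reuse afterwards
var mongoose = require('mongoose');
var UserSchema = new mongoose.Schema({ name: String });
//mongoose.models holds every model compiled so far
module.exports = mongoose.models.User || mongoose.model('User', UserSchema);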

Testing/Mock model pre-hook

With Mock Library: — with promises

//in UserModel.js
MongooseModel.find().limit(10).sort('-date').exec().
then(function(result) {
    //do Things 
    return result;
});
//in user.model.spec.js 
require('sinon');
require('mongoose');
require('sinon-mongoose');
require('sinon-as-promised');
//in user.model.spec.js, the describe section looks like:  
describe('UserModel', function(){
    it('works', function(done){
        sinon.mock(MongooseModel)
            .expects('find')
            .chain('limit').withArgs(10)
            .chain('sort').withArgs('-date')
            .chain('exec')
            .resolves('SOME_VALUE'); //.yields(null, 'SOME_VALUE') for the callback flavor
        //exercising the mocked chain resolves with the canned value
        MongooseModel.find().limit(10).sort('-date').exec().then(function(result){
            done();
        });
    });
});

With Mock Library: — paired with streams

//codewise 
UserModelMock.find().stream().pipe(new Transformer()).pipe(res);
//in user.model.spec.js
require('sinon');
require('mongoose');
require('sinon-mongoose');
require('sinon-as-promised');
describe('UserModel', function(){
    it('works', function(){

    });
});
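A minimal sketch of what that spec could look like, assuming a UserModel module and substituting the Transformer/res pair with a simple collector; a plain sinon stub makes .stream() return a fake readable stream, so no database is hit:

var stream = require('stream');
var sinon = require('sinon');
var assert = require('assert');
var UserModel = require('./model/user'); //assumed path, as in the model example

describe('UserModel streams', function(){
  it('pipes mocked documents', function(done){
    //fake readable standing in for the mongoose query stream
    var fakeStream = new stream.Readable({ objectMode: true, read: function(){} });
    var findStub = sinon.stub(UserModel, 'find').returns({
      stream: function(){ return fakeStream; }
    });
    var seen = [];
    var collector = new stream.Writable({
      objectMode: true,
      write: function(doc, enc, next){ seen.push(doc); next(); }
    });
    collector.on('finish', function(){
      findStub.restore();
      assert.equal(seen.length, 2);
      done();
    });
    UserModel.find().stream().pipe(collector);
    fakeStream.push({name: 'jane'});
    fakeStream.push({name: 'john'});
    fakeStream.push(null); //signals the end of the stream
  });
});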

Going down the rabbit hole ~

Routes

Few tutorials address testing node/express applications with authenticated routes. Few resources address testing fairly large applications and the complexity that comes with them. In fact, a real life node/express server may also start background tasks (cronjobs, monitoring tasks), coupled with a WebSocket endpoint. With many tiers working in concert, adding a line or two can break the whole system.

Challenges while testing ExpressJS Routes

Quite often, I found that most articles talk about integration testing, but few of them give a hint on how to unit test express applications.

  • Test code and not the response
  • Mock requests to Payment Gateway, etc.
  • Mock database read/write operations
  • Be able to cover exceptions, missing data structures, etc.
  • Avoid falling into integration testing while unit testing, and vice-versa.
  • Sanity check/Integration tests for client facing endpoints.
  • Use plumber to log incidents while setting up tests
  • Instrumentation and test reporting (used gulp-coverage)

Testing routes without spinning up a server.

Routes still have to be exercised while testing, but the server may not be up all the time, especially when testing within a sandboxed environment (CI server, etc.).

  var express = require('express');
  var request = require('supertest');

describe('req', function(){
  describe('.route', function(){
    it('should be the executed Route', function(done){
      var app = express();

      app.get('/user/:id/edit', function(req, res){

        // test your controllers with req,res here (like below)

        req.route.method.should.equal('get');
        req.route.path.should.equal('/user/:id/edit');
        res.end();
      });

      request(app)
      .get('/user/12/edit')
      .expect(200, done);
    })
  })
});

Example adapted from StackOverflow and supertest. Supertest spins up a server if necessary. In case we don't want a server at all, the alternative dupertest can be a big deal.

To sum up, spend extra time writing your tests; it pays off. Effective tests are written before writing code. If you already have the code, a good time to add tests is before adding more code.

In the long run, bugs are expensive for any project. Take it slow.

Authenticated Routes

Going down the rabbit hole ~ Local Authentication with Passport and Express, BDD-TDD, How to test with Auth0 protected route

Mocking Request

  1. Using node-mocks-http, we can create Request/Response objects similar to the ones provided by node's native http library
var httpMock = require('node-mocks-http');
//method = GET|PUT|POST|DELETE
//url = endpoint to test
var request = httpMock.createRequest({method: method, url: url});

Mocking Response

//initialization (or beforeEach)
var response = httpMock.createResponse({eventEmitter: require('events').EventEmitter});
//Usage: somewhere in tests
controller.useReqRes(request, response);
response.on('end', function(){
  //write assertions in this closure
});
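
Putting the two together, a minimal sketch, assuming a hypothetical getUser(req, res) controller under test; _getData() and statusCode are part of node-mocks-http's mocked response:

var httpMock = require('node-mocks-http');
var assert = require('assert');

describe('getUser controller', function(){
  it('responds with a user payload', function(done){
    var request = httpMock.createRequest({method: 'GET', url: '/user/12', params: {id: '12'}});
    var response = httpMock.createResponse({eventEmitter: require('events').EventEmitter});
    response.on('end', function(){
      //the mocked response records what the controller wrote
      assert.equal(response.statusCode, 200);
      assert.ok(response._getData());
      done();
    });
    getUser(request, response); //hypothetical controller under test
  });
});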

Going down the rabbit hole ~

Reading list

Modularization of Express routes

While following the simple principle of “make it work”, you realize that route code becomes huge and locked into one single file. Assuming all our models are NOT in the same file as our route files, the following source code may result:

var User = require('./models').User; 
/** code that initialize everything, then comes this route*/
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

/**
 * More code, more time, more developers 
 * Then you realize that you actually need:
 */ 
app.get('/admin/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

The easy way to mitigate that is grouping functions that are similar into the same file. Since the service layer is sometimes not so relevant, we can group functions into controllers.

//in controller/user.js
module.exports = function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error || !user){
      return next(error); 
    }
    return res.status(200).json(user);
  });
};

//in routes/user.js
var getUser = require('controller/user');
var router = require('express').Router();
router.get('users/:id', getUser);
router.get('admin/:id', getUser);
//exporting the router
module.exports = router;

Both controller/user.js and the two routes can now be tested in isolation.
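
A minimal sketch of such an isolated test, combining node-mocks-http (introduced earlier) with a stubbed model; the paths and payload are illustrative:

//controller.user.spec.js - a hedged sketch, not the only way to do it
var httpMock = require('node-mocks-http');
var sinon = require('sinon');
var assert = require('assert');
var User = require('./models').User;
var getUser = require('./controller/user');

describe('controller/user', function(){
  it('responds with the user found by id', function(done){
    //stub the model: yields (error, user) without touching the database
    var findById = sinon.stub(User, 'findById').yields(null, {_id: '12', name: 'jane'});
    var req = httpMock.createRequest({method: 'GET', url: '/users/12', params: {id: '12'}});
    var res = httpMock.createResponse({eventEmitter: require('events').EventEmitter});
    res.on('end', function(){
      assert.equal(res.statusCode, 200);
      findById.restore();
      done();
    });
    getUser(req, res, function next(err){ done(err); });
  });
});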

Manifest routes

//http://stackoverflow.com/a/5365577/132610
//requiring a directory looks for index.js at the top of that directory 
var routes = require('./routes'); 
//routes resolves to index.js in the /routes directory.

// routes/index.js
var express = require('express');  
var router = express.Router();
//index
router.get('/', function(req, res, next) {  
  return res.render('index', { title: 'Express' });
});
module.exports = router;  

// routes/users/index.js
var router = require('express').Router();  
router.get('/get/:id', require('./get-user.js'));  
router.post('/new', require('./new-user.js'));  
router.post('/delete/:id', require('./delete-user.js'));  
module.exports = router;    

“The most elegant configuration that I've found is to turn the larger routes with lots of subroutes into a directory instead of a single route file” – Chev source

//route handler
//routes/users/get-user|new-user|delete-user.js
module.exports = function (req, res) {  
  // do stuff
};
// routes/users/index.js
//update when routes/users/favorites/ adds more sub-directories
router.use('/favorites', require('./favorites')); 
/* ... */
module.exports = router;
//Using route and controllers' route handler
//@link http://stackoverflow.com/a/34174691/132610
var router = require('express').Router();
var catalogues = require('./controllers/catalogues');

router.route('/catalogues')
.get(catalogues.getItem)
.post(catalogues.createItem);
module.exports = router;

Going down the rabbit hole ~ More on organizing your nodejs application: An Intuitive Way To Organize Your ExpressJS Routes

WebSockets

Introduction

It is hard to imagine a realtime application that doesn't use WebSocket at some point nowadays. The success of WebSocket rests not only on its secure-able full duplex capabilities, but also on being an open standard supported in major, if not all, Web Servers and Web Browsers.

This chapter introduces some possible ways to test a server side WebSocket connection, without spinning up an actual WebSocket server. Since Redis is most of the time coupled with WebSocket connections, for authentication and inter-process communication purposes, it makes sense to look at those two components at the same time.

WebSocket — Mocking Redis Interactions

  • When the application is using redis (local or remote)
  • Multiple tests stress the redis server (local or remote)
  • Mocking the redis interaction makes the app run faster, and reduces friction caused by the network
  • Makes it possible to run without spinning up a redis server.

There is more than one way to go about mocking. I previewed a few libraries and chose the one that best fits my needs.

Some of those libraries are: rewire, fakeredis, proxyquire and plain old sinon.

  • Using rewire
var rewire = require('rewire');
var sinon = require('sinon');
//module in which redisClient will be mocked
var controller = rewire("/path/to/controller.js");
//the mock object + spies
var redisMock = {
  //get|pub|sub are spies that can return a promise or a canned value
  get: sinon.spy(function(options){ return "someValue"; }),
  pub: sinon.spy(function(options){ return "someValue"; }),
  sub: sinon.spy(function(options){ return "someValue"; })
};
//replacing the redis client methods :::: this does not prevent spinning up a new redis server
controller.__set__('redisClient', redisMock);
  • Using fakeredis: fakeredis provides a drop-in replacement for redis's createClient and its functionality.
var redis = require("redis");    
var fakeredis = require('fakeredis'); 
var sinon = require('sinon'); 
var assert = require('chai').assert; 

var users, client; 
describe('TestCase', function(){
  before(function(){
    sinon.stub(redis, 'createClient', fakeredis.createClient);
    client = redis.createClient(); //or anywhere in code it can be initialized
  });

  after(function(done){
    client.flushdb(function(error){
      redis.createClient.restore();
      done();
    });
  });
});
  • Using redis-mock

The goal of the redis-mock project is to create a feature-complete mock of node_redis (https://github.com/mranney/node_redis), so that it may be used interchangeably when writing unit tests for code that depends on Redis.

  • Using proxyquire
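
A minimal sketch, assuming the controller calls require('redis').createClient() internally; the path is illustrative:

var proxyquire = require('proxyquire');
var sinon = require('sinon');

//a stubbed redis module, handed to the controller in place of the real one
var redisStub = {
  createClient: sinon.stub().returns({
    get: sinon.stub().yields(null, 'someValue'),
    set: sinon.stub().yields(null, 'OK')
  })
};
var controller = proxyquire('./path/to/controller', { 'redis': redisStub });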

Modularization of Redis for testability

  • Having redis.createClient() everywhere makes it hard to mock. You cannot easily control creation/deletion of redis instances (pub/sub)
  • One way is to create one instance (preferably while loading the top-level module), and inject that instance into dependent modules
//in app|server|index.js   
var client = require("redis").createClient(); 
var app = require("./lib")(client);//<- Injection

//in lib/index.js (illustrative): the injected instance flows down to handlers
var createClient = require('./lib/util/redis');
module.exports = function(redis){
  return function(req, res, next){
    var redisClient = createClient(redis);
    return res.status(200).json({message: 'About Issues'});
  };
};


//usage
var getMessage = require('./')(redis);

//create a redis module that exports a baked client 
const redis = require("redis"); 
const port = process.env.REDIS_PORT || "6379";
const host = process.env.REDIS_HOST || "127.0.0.1";
module.exports = redis.createClient(port, host);

Another alternative is to delegate redis.createClient() to a factory.

  • The redis used to create the client will be the one being mocked. 
  • This strategy to rethink the application structure was found here: https://stackoverflow.com/a/43038690/132610

    //redis-helper.js 
    module.exports = function(redis){
        return redis.createClient(port, host);
    };
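
In a test, a minimal sketch of this: whatever redis object is handed to the helper is the one being mocked, so no real server is involved:

var sinon = require('sinon');
var createClient = require('./redis-helper');

//a fake redis module with a stubbed createClient
var fakeRedis = {
  createClient: sinon.stub().returns({
    get: sinon.stub().yields(null, 'someValue')
  })
};
var client = createClient(fakeRedis); //the helper never touches a real redis server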

Going down the rabbit hole ~

Socket.IO, Express session sharing

  • It is possible to use session middleware between Socket.IO and Express.
  • The following is doable for any middleware including session
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var sio = require("socket.io")(server);

function middleware(req, res, next){
  //session thing
  next();
}

sio.use(function(socket, next){
  middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);
    
//somewhere
sio.sockets.on("connection", function(socket) {
  //socket.request.session 
  //Now it's available from Socket.IO sockets too! Win!
});
//source ~ http://stackoverflow.com/a/25618636/132610


Going down the rabbit hole ~ The first redis mocking library I looked into was redis-mock. You may find it interesting, if not useful in your case. Rewire provides another alternative: rewire ~ easy monkey-patching for node.js unit tests. proxyquire ~ proxies nodejs' require in order to allow overriding dependencies during testing. See also: Faking Redis in Nodejs with Fakeredis, Testing Socket.IO with Mocha, Should.js and Socket.IO Client, Sharing session between Express and SocketIO, Faking Redis in Nodejs with Fakeredis (a tutorial), and Mock Redis Client, then stub function with sinon ~ rewire


Modular Socket.IO/Express application

Express routes use a SocketIO instance to deliver some messages. The structure of a socket.io enabled application looks like the following:

//module/socket.js
var socket = require('socket.io');
//server: the http server or express app instance provided by the caller
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connect', function connectHandler(){ /**...*/ }); 
  io.on('disconnect', function disconnectHandler(){ /**...*/ });
  return io;
};
    
//in server.js 
var express = require('express'); 
var app = express();
var server = require('http').createServer(app);

//Application app.js|server.js initialization, etc. 
require('./module/socket.js')(server);       
        

For the SocketIO app to use the same Express server instance, the route module and the socket.io module can be shared as follows:

//routes.js - has all route initializations
var route = require('express').Router();
module.exports = function(){
  route.all('/', function(req, res, next){ 
    res.send(); 
    next();
  });
  return route;
};

//socket.js - has socket communication code
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};
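
A minimal sketch exercising the socket.js module above with an in-process server and socket.io-client; listening on port 0 lets the OS pick any free port:

var http = require('http');
var ioClient = require('socket.io-client');
var ioServer = require('./socket.js'); //the module above

describe('socket.js', function(){
  var server, client;
  before(function(done){
    server = http.createServer().listen(0, function(){
      ioServer(server); //attach socket.io to the already listening server
      client = ioClient('http://localhost:' + server.address().port);
      client.on('connect', function(){ done(); });
    });
  });
  after(function(){
    client.close();
    server.close();
  });
  it('accepts connections', function(){
    //assertions on emitted/received events go here
  });
});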

Socket Session sharing

Sharing session between SocketIO and Express application

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between SocketIO and Express 
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

Going down the rabbit hole ~ A good way to learn is asking questions, or answering others' questions. Some of the questions people ask: High Volume, low latency difficulties node/pubsub/redis, examples using redis-store with socket.io, Using Redis as PubSub over Socket.IO and Modularizing Socket.io with express 4

Going down the rabbit hole ~ By reading following articles about structuring your NodeJS application: Building a Chat Server with node and redis – tests and Bacon.js + Node.js + MongoDB: Functional Reactive Programming on the Server

Servers

Introduction

This section deals with simulations to test the start and stop of a server, as well as checking whether the server can attach other application components.

As a quick reminder, NodeJS comes with servers bundled in native code. Modules such as http, https and websocket streams, just to name a few, constitute servers in some sense.

The challenge lies in how to test major scenarios without actually spinning up a server.

Approach

The approach to testing the server is twofold: first, leveraging module exports to modularize the server; second, mocking anything related to spinning up an actual server.

Code

A very basic NodeJS server looks like the following:

var http = require('http');
var hostname = 'localhost';//127.0.0.1
var port = process.env.PORT || 3000;

var server = http.createServer(function(req, res){
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World\n');
});

server.listen(port, hostname, function (){
  console.log(['Server running at http://', hostname, ':', port].join(''));
});
//source: https://nodejs.org/api/synopsis.html#synopsis_example

The Express framework provides an alternative way to create a server, as in the following snippet:

var express = require('express');
var app = express();
var port = process.env.PORT || 3000;
/** .. more routes + code to initialize your app ... */
app.get('/', function (req, res) {
  return res.status(200).send('Hello World!');
});

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!');
});
//source: https://expressjs.com/en/starter/hello-world.html

As requirements increase, this file grows exponentially. Most applications run on top of express.js, a popular library in the Node world. To keep server.js small, regardless of requirements and dependent modules, moving most of the code into modules makes a difference.

Modularization

The previous example shows how much simpler server initialization becomes, at the cost of an additional library to install. Modularizing the two code segments above makes it possible to test the server in isolation.

var express = require('express');
var app = express();
var port = process.env.PORT || 3000;
var hostname = 'localhost';
/** .. more routes + code for app ... */
var server = require('http').createServer(app);

app.set('port', port);

app.get('/', function (req, res) {
  res.send('Hello World!');
});

server.listen(app.get('port'), hostname, function() {/* ... */});

//Modularization - this line makes server available in our tests. 
module.exports = server;

//source: https://glebbahmutov.com/blog/how-to-correctly-unit-test-express-server/

Test Case

The modularized version gives a cleaner entry point to test the whole server code. Before testing, it is worth mentioning that the server.listen() function can be stubbed to mock the response. Stubbing functions that spin up the server is not a good idea while writing Integration Tests, though.

var http = require('http');
var sinon = require('sinon');
var expect = require('chai').expect;
describe('server', function(){
  afterEach(function(){
    this.serverStub.restore();
    this.server.close();
  });
  beforeEach(function(){
    //return a fake server: calling through to http.createServer here would recurse into the stub
    this.serverStub = sinon.stub(http, 'createServer', function(app){
      return { listen: sinon.spy(), close: sinon.spy() };
    });
    this.server = require('./server');
  }); 
  it('works', function(){
    expect(this.server.listen.called, 'Should have called Listen Function').to.be.true;
  });
});

Going down the rabbit hole ~ How to correctly unit test an express server. There is also a better code structure organization, one that makes it easy to test, get coverage, etc., at Testing nodejs with mocha

Summary

There would be no need to test working legacy code, if it were not for refactoring. Refactoring may be needed to reduce code smells, crack down on bugs, or modernize your codebase.

Increased modularization comes into play while refactoring. In the following section the stress is more on modularization of the http module, with the introduction of a framework.

A NodeJS application server comes in two flavors: using the native NodeJS library, or adopting a server provided by a framework.

Background Jobs

Introduction

A more accurate description of background jobs is scheduled jobs. This section lays the grounds to build on while testing scheduled jobs. Job Queue Managers will be abstracted and simulated by mocked objects, to make things a little bit easier.

Among the libraries available in the JavaScript community to schedule jobs, Agenda made more sense. The choice is not cast in stone; based on project requirements, one may choose a quite different solution. Nevertheless, the philosophy is the same.

Agenda was chosen based on its ability to schedule tasks using human readable instructions, to persist jobs in a mongodb instance, and its transparent API. In fact, testing with this library feels the same as testing a callback. Kue is another library that was considered.

Modularization

To curb efforts spent on testing alone, small chunks of functionality can be moved to independent libraries. Only the direct dependencies that those libraries need should be tightly coupled, and only as a last resort.

//jobs/email.js
var EmailService = require('service/email');
/*@param {Object<Agenda>} agenda - instance of agenda initialized by the caller*/
module.exports = function(agenda){
    //using tightly coupled EmailService here.
};

Injecting a decoupled agenda makes it possible to easily test the task in isolation, without even needing to import the actual agenda package into the project. One way to initialize the Job Scheduler may also be to use a dedicated module.

//scheduler/agenda.js
/** 
 * @return {Object<Agenda>}
 */
module.exports = function(){
    var Agenda = require('agenda');
    return new Agenda({/*configurations*/});
};

Aside from working code, most testing related modularization happens in the /fixtures directory (or /mocks, depending on your preference or the school of thought you belong to). The following discussion takes a quick look at how to modularize the stubs needed for the Email Service and the Mongoose Model that finds a user.

//fixtures/index.js
var sinon = require('sinon');
/** @param {Object<EmailService>} EmailService - object holding the stub candidate function */
module.exports.SendEmailService = function(EmailService){
    return sinon.stub(EmailService, 'send', function(args){
        //replacement of the send function: executes and returns the callback passed to it
        return arguments[arguments.length - 1](args); 
    });
}; 

/** @param {Object<MongooseModel>} User - Model having findById() as a stub candidate */
module.exports.UserFindById = function(User){
    return sinon.stub(User, 'findById', function(){
       //findById always yields error-first + mocked user data 
       return arguments[arguments.length - 1](null, MockedUserData); 
    });  
};

/** @param {Object<AgendaInstance>} agenda - object having define() as a stub candidate */
module.exports.DefineAgenda = function(agenda){
    return sinon.stub(agenda, 'define', function(job, done){
       //forwards the passed callback with the original job (or MockedJobData) 
       return arguments[arguments.length - 1](job || MockedJobData, done); 
    });  
};

Grouping stub utilities in a module maximizes code re-usability across unit tests.

Code

The following piece of code shows a typical job definition interface. It is made in a way that is easy to modularize.

//jobs/email.js
var EmailService = require('./util/email'); 
var User = require('./models/user.js');
module.exports = function(agenda) {
  agenda.define('user onboarding email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
       if(err) return done(err);
       	var message = ['Thanks for registering ', user.name, 'more goes here somehow'].join('');
      	return new EmailService(user.email, message).send(done);
     });
  });
  agenda.define('reset password', function(job, done) {/* ... more code*/});
  // More email related jobs
};


A quick example of how this can be integrated in an existing application may look like the following. The next source code is provided for illustration purposes, not for testing. The reference to test such an example can be found in the Route/Controller section.

//Job trigger can be used with routes as in following example
//lib/controllers/user-controller.js
var app = require('express')(),
    User = require('./models/user'),
    agenda = require('./scheduler/agenda');
app.post('/users', function(req, res, next) {
  new User(req.body).save(function(err, user) {
    if(err) return next(err);
     //@todo - Schedule an email to be sent before expiration time
     //@todo - Schedule an email to be sent 24 hours
     //This triggers a task to send registration email right away.
     agenda.now('registration email', { userId: user.primary() });
     agenda.schedule('in 24 hours', 'user onboarding email', {userId: user.primary()});
     return res.status(201).json(user);
  });
});

app.listen(port, function(){
   //registering the job somewhere when the server starts 
   require('jobs/email')(agenda);	
});

Test

Depending on the number of dependencies, testing jobs may be a bit complex. The default case includes a Mongoose Model, but can also include sending emails, among other side effects.

//Things to test
var agenda = require('./scheduler/agenda')(),
    User = require('models/user'),
    EmailService = require('util/email'), 
    EmailScheduledJob = require('jobs/email');

//Fixtures - remember that fixtures is a directory, but exports are defined in the same index file.
var UserFindById = require('fixtures').UserFindById,
    DefineAgenda = require('fixtures').DefineAgenda,
    SendEmailService = require('fixtures').SendEmailService;

//Helpers that help mocking 
describe('SendRegistrationEmail', function(){
    //making sure all stubs are restored after tests
    afterEach(function(){
        this.UserFindByIdStub.restore();
        this.DefineAgendaStub.restore();
        this.SendEmailServiceStub.restore();
    });
    beforeEach(function(){
        this.UserFindByIdStub = UserFindById(User);
        this.DefineAgendaStub = DefineAgenda(agenda);
        this.SendEmailServiceStub = SendEmailService(EmailService);
    });
    it('works', function(){
        EmailScheduledJob(agenda);
        //Assertions go here - there is nothing to start, the test just runs
        assert(User.findById.called, 'User::FindById was called');
        assert(agenda.define.called, 'Agenda::Define was called');
        assert(EmailService.send.called, 'EmailService::send was called');
    });
});

Conclusion

Breaking down the route into smaller, library-like modules makes it easy, not only for testing purposes, but also for maintenance purposes. In case of a problem, isolated code tends to be easier to debug than spaghetti code.

Addendum

Deployment

A typical NodeJS deployment follows, in one way or another, the following steps:

  • download source code using git, wget, npm or any other package manager of your choice
  • configure, or inject, environment variables
  • symlink vital directories such as log, config, nginx config
  • restart any other dependent services the application needs to run, for instance databases (mongodb, couchdb, etc.), data-stores (redis, etc.), load balancers or web servers (nginx, etc.)
  • restart the application server

# Using Git to pull latest code
$ sudo git pull                 # or > git clone git-server/username/appname.git
# Using npm 
$ sudo npm install appname      # requires to have access to service on hosted package manager

# do manual or automated configuration here
# do manual or automated symlink here 

# restarting dependent services 
$ sudo service nginx restart    # nginx|apache server
$ sudo service redis restart    # redis server
$ sudo service restart mongod   # database server in some cases

# restarting application server
$ sudo service appname restart  # application itself, in our case: hoogy

# rollback (revert symlinking) when something goes awry here.

PS: The above services are managed with uptime.

Reducing the number of steps is a must while automating the whole process. If one of the above steps breaks, it is better to have a rollback strategy in place. Tagging releases and using versioning while packaging the application makes the whole process even easier.

Friction

One old way to reduce friction and achieve faster deployments is to bundle applications together with their dependencies. As a quick example, Java releases .jar|.war files, in which all dependency libraries are bundled into one executable artifact.

Rule of thumb “Build your dependencies into your deployable packages”

In JavaScript in general, and NodeJS in particular, the most common tactic to reduce friction is to publish your application as an npm package. In case you do not want to purchase yet another subscription, you still have the alternative of hosting an npm compatible package on GitHub.
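
As a minimal sketch, npm can resolve a dependency straight from a git URL in package.json; username/appname and the tag below are placeholders:

{
  "dependencies": {
    "appname": "git+https://github.com/username/appname.git#v1.0.0"
  }
}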

Down the rabbit hole ~ One way of reducing friction while deploying NodeJS application is by using containers. Getting started with Kubernetes and NodeJS can help you getting started with managing deployments with Kubernetes

Push to deploy

The push-to-deploy model is yet another alternative to go to production often, faster and reasonably safely. The push-to-deploy model democratizes deployment procedures, and makes it easy to spot, fix and release new patches relatively faster than a classic massive deployment.

The drill works as follows: a push to the live or master branch triggers a code download on the live server. A post-receive hook detects the end of the download and runs deployment scripts.

If anything goes bad, the symlink and server-restart step doesn't happen, hence preserving the integrity of your application. In case everything works as planned, the symlink + restart servers step executes, resulting in a successful release and deployment. This process is commonly known as Continuous Deployment.

# Server Side Code
$ apt-get update 		# first time on server side
$ apt-get install git	# first time git install 
$ apt-get update 		# updating|upgrading server side code

# create bare repository + post-receive hook 
# @link http://seanvbaker.com/using-git-to-deploy-node-js-sites-on-ubuntu/
# first time initialization
$ cd /path/to/git && mkdir appname.git
$ cd appname.git
$ git --bare init

# Post-Receive Hook
cd /path/to/git/appname.git/hooks
touch post-receive
# text to add in post-receive
>>>#!/bin/sh
>>>GIT_WORK_TREE=/path/to/git/appname git checkout -f

# change permission to be an executable  file
chmod +x post-receive

# Restart Services + Servers 

Git WebHook

Alternatively, and more advanced, the push-to-deploy model may be used with WebHooks. WebHooks are the lingua franca of web services. They provide a means of sending commands to remote instances, the same way REST works, but this time between machines.

Going down the rabbit hole ~ since this document is not about systems design, the following articles may help in understanding more about this feature: 1) Continuous deployment with github + gith, 2) Setting up push-to-deploy with git – Rollback strategy

Build servers

The push-to-deploy model looks attractive, but comes with big risks. In a larger team, how do you guarantee the safety of every deployment? One way is to run pre-push|commit tasks to analyze code quality. Some developers may comply, and some others may go rogue. Needless to say, it may take time to update all sanity check scripts across the development team.

A centralized, platform and developer independent system is needed to check sanity and determine whether code can integrate well with the existing system. This is where build servers come into the picture. Build servers are tasked to receive release candidates, execute test and build tasks, and green-light or red-light releases for production. In case a release has been green-lighted, the code continues to production (Continuous Delivery) or is tagged (Continuous Deployment).

Build servers can also be referred to as Continuous Integration servers, especially when their tasks go beyond building packages.

Going down the rabbit hole ~ with this non-exhaustive list of CI servers: 1) Distelli, 2) Magnum, 3) Strider, 4) Codeship and many more.

Zero downtime

A NodeJS server, like any server indeed, may go down for various reasons. Even though this document doesn't focus on product maintenance, the following ideas may nevertheless be good to know. Some of the reasons applications experience downtime may be detected using events such as uncaughtException, unhandledRejection or SIGTERM (the UNIX termination signal). The same mechanism applies when updating the application code base, to achieve zero downtime while deploying the latest version.

To recover from failure, the events stated above give a second chance to applications that leverage the cluster API to restart failing processes. The drill works as follows: the master cluster process waits for a SIGHUP (update/code push) signal, and sequentially terminates old processes before starting new child processes, as sketched below. You can find this gist useful.
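
A minimal sketch of that drill, assuming a worker entry point at ./server.js; the signal handling and rolling restart below are illustrative, not a production-ready recipe:

//a hedged sketch of a rolling restart on SIGHUP
var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(function(){ cluster.fork(); });
  process.on('SIGHUP', function(){
    var workers = Object.keys(cluster.workers).map(function(id){
      return cluster.workers[id];
    });
    (function restartNext(){
      var worker = workers.shift();
      if (!worker) return; //every worker has been replaced
      var replacement = cluster.fork(); //start the new child first
      replacement.once('listening', function(){
        worker.disconnect(); //stop accepting connections, then exit gracefully
        worker.once('exit', restartNext);
      });
    })();
  });
} else {
  require('./server'); //each worker runs the actual http server
}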

Another more common way is to deploy to platforms such as Heroku or OpenShift, commonly known as PaaS (Platform as a Service). Container based deployments such as Docker or Kubernetes also make it possible to deploy new code with zero downtime. These platforms spin up new servers on every new pushed/cleared version, and provide a rollback as soon as any deployment fails.

Going down the rabbit hole ~ More resources that can help achieve zero downtime deployment:Reloading node with zero downtime, Setting up express with nginx and pm2 is another helpful blog post, Zero-Downtime automated Node.js deployment, Zero downtime redeploys, Deploying and Scaling Zero Downtime NodeJS application , you may also be amazed by Hardening node.js for production part 3: zero downtime deployments with nginx

Monitoring

Downtime will always happen, however well your system has been tuned to avoid it. The worst nightmare is not being able to know on time that some sub-systems, or to some extent whole systems, went down. This scenario is what monitoring agents are for.

The most rudimentary monitoring service is to trigger an email (notification/text message) upon certain events. In the code below, 1) shows a typical event handler that can be re-used across events, and 2) shows examples of possible events. In a nutshell, NodeJS emits some events before killing the server. From there, it becomes possible to tap into those events and trigger a notification to the system administrator. Since the application may not recover from some of these events, it is wise to rely on a third party messaging service to deliver such notifications. The triggerNotification function uses mailgun as an example.

  //1. a typical handler, re-usable across events (the mailgun call is pseudo-code)
  function triggerNotification(event){ mailgun.send({message: String(event)}); }
  //2. possible events to tap into before the process dies
  process.on('uncaughtException', triggerNotification); 
  process.on('unhandledRejection', triggerNotification);
  process.on('SIGHUP', triggerNotification);
  process.on('SIGTERM', triggerNotification);

Going down the rabbit hole ~ with some third party services that can help you know when something goes wrong: Uptime Monitoring-dashboard

Infrastructure

For practical reasons, and from the customer's standpoint, it is imperative that your application provides 90%+ uptime. One strategy to make zero downtime a reality is to break down larger systems into smaller sub-systems. Smaller sub-systems may not necessarily translate into microservices. Installable libraries, also known as packages, are a good example of sub-systems; the same goes for frameworks.

For the sake of easing large scale application maintenance, deploying smaller sub-systems to various platforms makes it possible to achieve zero downtime. Since this section offered rather raw ideas, I curated a reading list about infrastructure and achieving zero downtime in the next section.

Going down the rabbit hole ~ If you want to know more about infrastructure, and how to “deploy your site through Netlify and add HTTPS, CDN distribution, caching, continuous deployment”, you definitely should visit Netlify

Memory Leak

Managing memory leaks in JavaScript applications can be a daunting task, needless to say in a NodeJS environment. For the time being, this document doesn't provide tips on memory leaks, but rather a curated list of articles that can help in taming the beast:

Going down the rabbit hole ~ with a couple of articles where you can find more information on memory leaks in Nodejs: 1) Hunting a Ghost – Finding a Memory Leak in Node.js, 2) Simple Guide to Finding a JavaScript Memory Leak in Node.js, 3) Tracking down Memory leaks in NodeJS – A NodeJS Holiday Season, and 4) How to self detect a memory leak in node

Documentation

Documentation is a vital tool to support code health over a long period of time. Good documentation makes sure knowledge transfers easily to anyone who will work on your code in the future. Automated tests are an integral part of knowledge sharing, when done right.

Some tools you can look into to keep documentation and code changes in sync are listed in the following section.

Going down the rabbit hole ~ with API documentation the easy way with Slate. Slate is like Swagger, but more sexy. To generate documentation based on code comments, DocumentationJS, jsdoc or docco can help you out.

Reading List

A list of additional important resources for testing nodejs applications.

References