The WebSocket protocol extends HTTP with an upgrade handshake, making near real-time communication a reality. Adding this capability to an already complex application does not make a large-scale codebase any easier to work with. Using modularization techniques to decouple the real-time portion of the application makes maintenance a little easier. The question is: how do we get there? This article applies modularization techniques to achieve that.

There is a wide variety of WebSocket implementations to choose from in the nodejs ecosystem. For simplicity, this blog post provides examples using socket.io, but the ideas expressed here apply to any other nodejs WebSocket implementation.

In this article we will talk about:

  • How to modularize WebSocket for reusability
  • How to modularize WebSocket for testability
  • How to modularize WebSocket for composability
  • The need for a store manager in a nodejs WebSocket application
  • How to integrate redis in a nodejs WebSocket application
  • How to modularize redis in a nodejs WebSocket application
  • How to share session between an HTTP server and WebSocket server

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//in server.js 
var express = require('express'); 
var app = express();
app.set('port', process.env.PORT || 8080);
...
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
...
var server = require('http').createServer(app);
server.listen(app.get('port'), () => console.log(`Listening on ${app.get('port')}`));
var wss = require('socket.io')(server);
//Handling realtime data ~ socket.io emits 'connection' (alias: 'connect')
wss.on('connection', (socket) => {
    socket.on('error', () => {});
    socket.on('pong', () => { socket.isAlive = true; });
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

What can possibly go wrong?

The following points may be a challenge when modularizing WebSocket nodejs applications:

  • WebSocket handlers are tightly coupled to the rest of the application. The challenge is how to reverse that.
  • How to modularize for optimal code reuse and easy testability

The following sections will explore how to make the points stated above work.

How to modularize WebSocket for reusability

When looking at the WebSocket handlers, one thing stands out: every handler has a signature that looks like any other event handler common in the JavaScript ecosystem. We also realize that handlers are tightly coupled to the WebSocket object. To break the coupling, we can apply one technique: eject handlers from the WebSocket setup, then inject the WebSocket and Socket objects wherever possible (composition).
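Applied to the example above, ejecting a handler into its own module and injecting the io/socket pair back at wiring time may look like this sketch (the handlers/message.js path and event names are chosen for illustration):

//in handlers/message.js - handler ejected from the WebSocket setup
module.exports = function(io, socket){
  //the returned closure is the actual event handler
  return function onMessage(payload){
    socket.emit('message:ack', payload); //reply to the sender
    io.emit('message', payload);         //broadcast to everyone
  };
};

//in realtime.js - io and socket are injected at wiring time
var onMessage = require('./handlers/message');
module.exports = function(io){
  io.on('connection', function(socket){
    socket.on('message', onMessage(io, socket));
  });
};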

How to modularize WebSocket for testability

As noted earlier, the WebSocket event handlers are tightly coupled to the WebSocket object. Mocking the WebSocket object comes at a hefty price: losing the implementation of the handlers. To avoid that, we can tap into two techniques: ejecting handlers, and loading the WebSocket via a utility library.
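With handlers ejected as above, a test can inject plain fakes in place of a live WebSocket. A minimal sketch, assuming the hypothetical onMessage module from the previous section and sinon for test doubles:

var sinon = require('sinon');
var onMessage = require('./handlers/message');

it('acknowledges and broadcasts incoming messages', function(){
  //plain fakes standing in for socket.io's io and socket objects
  var io = { emit: sinon.spy() };
  var socket = { emit: sinon.spy() };

  onMessage(io, socket)({ text: 'hello' });

  sinon.assert.calledWith(socket.emit, 'message:ack', { text: 'hello' });
  sinon.assert.calledWith(io.emit, 'message', { text: 'hello' });
});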

How to modularize WebSocket for composability

It is possible to shift perspective on the way the application adds WebSocket support. One question we can ask is: is it possible to restructure our code in such a way that it takes only one line of code to wipe out WebSocket support? An alternative question: is it possible to add WebSocket support to the base application using only one line of code? To answer these two questions, we will tap into a technique similar to the one used to mount an app instance onto a set of routers (an API, for example); the resulting structure is shown in the code later in this article.

The need for a store manager in a nodejs WebSocket application

JavaScript, for that matter nodejs, is a single-threaded programming language.

However, that does not mean parallel computing is not feasible. The threading model can be replaced with a process-based model when it comes to parallel computing. This enhancement comes with an additional challenge: how to make it possible for processes to communicate or share data, especially when processes are running on two separate CPUs.

The answer is to use one or more third-party processes that handle inter-process communication. Key/value stores are good examples that make this magic possible.

How to integrate redis in a nodejs WebSocket application

redis comes with an expressive API that makes it easy to integrate with an existing nodejs application.

It makes sense to question the approach used while adding this capability to the application. In the following example, any message received on the wire will be logged into the shared redis key store.

All subscribed message listeners will then be notified about an incoming message. In the event there is a response to send back, the same approach is followed, and the listener is responsible for sending the message back down the wire. This process may be repetitive, but it is one of the good ways to handle this kind of scenario.
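A minimal sketch of that flow, assuming the redis npm client and a chat:message channel name chosen for illustration:

var redis = require('redis');
var pub = redis.createClient();
var sub = redis.createClient();//a subscribing connection is dedicated

module.exports = function(io){
  sub.subscribe('chat:message');
  //notify all subscribed listeners about an incoming message
  sub.on('message', function(channel, payload){
    io.emit('message', JSON.parse(payload));
  });
  io.on('connection', function(socket){
    //log any message received on the wire into the shared redis store
    socket.on('message', function(payload){
      pub.publish('chat:message', JSON.stringify(payload));
    });
  });
};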

There is an entire blog post dedicated to modularizing redis clients here

How to modularize redis in a nodejs WebSocket application

The redis integration example above is tightly coupled to redis event handlers. Ejecting the handlers is a good starting point; grouping the ejected handlers in a module can follow suit. The next modularization step is composing (injecting redis clients into) the resulting modules when needed.
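Applied to the sketch above, ejected redis handlers grouped in a module, with the clients injected at composition time (module paths are illustrative), may look like this:

//in redis/handlers.js - ejected redis event handlers
module.exports.onMessage = function(io){
  return function(channel, payload){
    io.emit('message', JSON.parse(payload));
  };
};

//in redis/index.js - composition: io and redis clients are injected
var handlers = require('./handlers');
module.exports = function(io, pub, sub){
  sub.subscribe('chat:message');
  sub.on('message', handlers.onMessage(io));
  return { pub: pub, sub: sub };
};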

How to share sessions between the HTTP server and WebSocket server

If we look closer, especially when dealing with namespaces, we find a similarity between HTTP requests (handled by express in our example) and WebSocket messages (handled by socket.io in our example). For applications that require authentication, or any other type of server-side session, it would not be necessary to have one authentication per protocol. To solve this problem, we rely on a middleware that passes session data between the two protocols.

Modularization reduces the complexity associated with large-scale nodejs applications in general. We assume that socket.io/expressjs applications are no exception in the current context. In a real-time context, we focus on making most parts accessible for use by other components and tests.

Express routes can use the socket.io instance to deliver some messages. The structure of a socket.io-enabled application looks like the following:

//module/socket.js
var socket = require('socket.io');
//server: the http server (or express app) instance provided by the caller
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connect', fn); 
  io.on('disconnect', fn);
  return io;
};
    
//in server.js 
var express = require('express'); 
var app = express();
var server = require('http').createServer(app);

//Application app.js|server.js initialization, etc. 
require('module/socket.js')(server);       
        

For the socket.io app to use the same Express server instance, or to share a route instance with the socket.io server:

//routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(){
    route.all('*', function(req, res, next){ 
    	res.send(); 
    	next();
    });
    return route;
};

//socket.js - has socket communication code
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

Socket Session sharing

Sharing a session between the socket.io and Express applications:

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between `socket.io` and Express 
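//sessionMiddleware is assumed to be the same express-session instance the Express app uses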
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

Conclusion

Modularization is a key strategy in crafting reusable, composable software. Modularization brings not only elegance but keeps copy/paste detectors happy, and at the same time improves both performance and testability.

In this article, we revisited how to aggregate WebSocket code into composable and testable modules. Grouping related tasks into modules brings the ability to add Pub/Sub support on demand, and to swap solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

Testing functions attached to objects other than a class instance constitutes an intimidating edge case at first sight. Such objects range from object literals to modules. This blog post explores some test-double techniques to shine a light on such cases.

For context, the difference between a function and a method is that a method is a function encapsulated into a class.

In this article we will talk about:

  • Key difference between a spy, stub, and a fake
  • When it makes sense to use a spy over a stub

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');

module.exports.removeUserPhoto = function(req, res, next){
    let filepath = `/path/to/photos/${req.params.photoId}.jpg`;
    fs.unlink(filepath, (error) => {
        if(error) return next(error);
        return res.status(200).json({
            message: `Your photo is removed - Photo ID was ${req.params.photoId}`
        });
    });    
}

Example: A simple controller that takes a photo ID and deletes the file associated with it

What can possibly go wrong?

Some challenges when mocking chained functions:

  • Stubbing a method, while keeping original callback behavior intact

Show me the tests

From the How to mock chained functions article, there are three avenues relevant to the current context that we can leverage for our mocking strategy.


let outputMock = { ... };
//approach 1: stub returning a canned value
sinon.stub(obj, 'func').returns(outputMock);
//approach 2: stub with a fake implementation
sinon.stub(obj, 'func').callsFake(function fake(){ return outputMock; });
//approach 3: a spy wrapping a fake implementation
let func = sinon.spy(function fake(){ return outputMock; });

We can put those approaches to the test in the following test case:

var sinon = require('sinon');
var assert = require('chai').assert;
//hypothetical path to the controller shown above
var removeUserPhoto = require('./controller').removeUserPhoto;

// Somewhere in your code. 
it('#fs:unlink removes a file', function () {
    this.fs = require('fs');
    //mocked behaviour: invoke the callback right away, no real file removal
    var func = function(filepath, fn){ return fn.apply(this, [null]); };

    //Spy + Stub fs.unlink, to avoid a real file removal
    var unlink = sinon.stub(this.fs, 'unlink').callsFake(func);

    //exercise the controller so that unlink actually gets called
    var req = { params: { photoId: '1234' } };
    var res = { status: () => ({ json: sinon.spy() }) };
    removeUserPhoto(req, res, sinon.spy());

    assert(this.fs.unlink.called, "#unlink() has been called");
    unlink.restore(); //restoring the default function 
});

Conclusion

In this article, we established the difference between stub/spy and fake concepts, how they work in concert to deliver effective test doubles, and how to leverage their drop-in-replacement capabilities when testing functions.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

Mocking and stubbing walk hand in hand. In this blog post, we document stubbing functions with promise constructs. The use cases are based on Models. We keep in mind that there is a clear difference between mocking versus stubbing/spying and using fakes.

In this article we will talk about:

  • Stub a promise construct by replacing it with a fake
  • Stub a promise construct by using third-party tools
  • Mocking database-bound input and output

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


//Lab Pet
window.fetch('/full/url/').then(function(res){ 
    service.doSyncWorkWith(res); 
    return res; 
}).catch(function(err){ 
    return err;
});

Example: a promise-based fetch call under test

What can possibly go wrong?

When trying to figure out how to stub functions that return a promise, the following points may be a challenge:

  • How to deal with the asynchronous nature of the promise.
  • Making stubs drop-in replacements of some portion of the code block, while leaving everything else intact.

The following sections will explore how to make the points stated above work.

Content

  • From Johnny Reeves' Blog: stub the service's async function, then return a mocked response

var sinon = require('sinon');
describe('#fetch()', function(){
    before(function(){ 
        //pick ONE of the following: re-stubbing an already stubbed
        //method throws, so the alternatives stay commented out
        //one way
        fetchStub = sinon.stub(window, 'fetch').returns(bakedPromise(mockedResponse));
        //other way
        //fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
        //    return bakedPromise(mockedResponse);
        //});
        //other way
        //fetchStub = sinon.stub(window, 'fetch').resolves(mockedResponse);
    });
    after(function(){ fetchStub.restore(); });
    it('works', function(){
        //use default function like nothing happened
        window.fetch('/url');
        assert(fetchStub.called, '#fetch() has been called');
        //or 
        assert(window.fetch.called, '#fetch() has been called');
    });
    it('fails', function(){
        //restore the before() stub so window.fetch can be re-stubbed here
        fetchStub.restore();
        //one way
        fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
            return bakedFailurePromise(mockedResponse);
        });
        //another way using 'sinon-stub-promise's returnsPromise()
        //PS: You should install => npm install sinon-stub-promise
        //fetchStub = sinon.stub(window, 'fetch').returnsPromise().rejects(reasonMessage);
    });
});

Example:

  • bakedPromise() is any function that takes a mocked (baked) response and returns a promise
  • This approach doesn't tell us whether service.doSyncWorkWith() has been called; for that, add a spy on the service

Conclusion

In this article, we established the difference between promises and regular callbacks, and how to stub promise constructs, especially in a database-operations context, replacing them with fakes. Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

divide et impera

One of the key issues when working with large-scale nodejs applications is the management of complexity. Modularization shifts focus to transforming the codebase into reusable, easy-to-test modules. This article explores some techniques used to achieve that.

This article is more theoretical; “How to make nodejs applications modular” is more technical and may help with implementation.

In this article we will talk about:

  • Exploration of modularization techniques available within the ecosystem
  • Leveraging module.exports or import/export utilities to achieve modularity
  • Using the index file to achieve modularity
  • How the above techniques can be applied at scale

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

This piece of code is going to go through modularization in the “How to make nodejs applications modular” blog post. For now, we will highlight failures and points of interest below.

var express = require('express');
var app = express();

/**Data Layer*/
var mongoose = require("mongoose");
mongoose.connect('mongodb://localhost:27017/devdb');
var User = require('./models').User; 

/**
 * Essential middlewares (express 3-style APIs; standalone packages as of express 4)
 */
app.use(express.logger());
app.use(express.cookieParser());
app.use(express.session({ secret: 'angrybirds' }));
app.use(express.bodyParser());
app.use((req, res, next) => { /** Adding CORS support here */ });

app.use((req, res) => res.sendFile(path.normalize(path.join(__dirname, 'index.html'))));


/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!');
});


/** code that initialize everything, then comes this route*/
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

/**
 * More code, more time, more developers 
 * Then you realize that you actually need:
 */ 
app.get('/admin/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
/**
 * This would work just fine, but we may also have a requirement to listen to Twitter changes 
app.listen(port, function () {
  console.log('Example app listening on port 3000!')
});
*/

var server = require('http').createServer(app);
server.listen(app.get('port'), () => console.log(`Listening on ${ process.env.PORT || 8080 }`));
var wss = require('socket.io')(server);
//Handling realtime data
wss.on('connection', (socket) => {
    socket.on('error', () => {});
    socket.on('pong', () => {});
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

Example: a monolithic server.js before modularization

What can possibly go wrong?

When trying to navigate strategies around modularization of nodejs applications, the following points may be a challenge:

  • Where to start with modularization
  • How to choose the right modularization technique.

The following sections will explore how to make the points stated above work.

Modules

In the nodejs context, anything from a variable to a function, to classes, or an entire library qualifies to become a module.

A module can be seen as an independent piece of code dedicated to doing one and only one task at a time. The amalgamation of multiple tasks under one abstract task, or one unit of work, is also a good module candidate. To sum up, modules come as functions, objects, classes, configuration metadata, initialization data, servers, etc.

Modularization is one of the techniques used to break down large software into smaller, malleable, more manageable components. In this context, a module is treated as the smallest independent composable piece of software that does only one task. Testing such a unit in isolation becomes relatively easy. Since it is a composable unit, integrating it into another system becomes a breeze.

Leveraging exports

To make a unit of work a module, nodejs exposes import/export, or module.exports/require, utilities. Therefore, modularization is achieved by leveraging the power of module.exports in CommonJS (the ES5 era), equivalent to export in ES2015+ modules. With that idea, the question “Where to start with modularization?” becomes workable.

Every function, object, class, configuration metadata, initialization data, or server that can be exported has to be exported. That is what leveraging module.exports or import/export utilities to achieve modularity looks like.
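As an illustration, the CommonJS flavor, with its ES2015+ counterpart kept in comments (names are hypothetical):

//user/model.js - CommonJS (the ES5-era) export
function User(){ /* ... */ }
module.exports = User;
//elsewhere: var User = require('./user/model');

//ES2015+ modules equivalent:
//export default User;               (in user/model.js)
//import User from './user/model';   (elsewhere)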

After each individual entity becomes exportable, there is a small enhancement that can make importing the entire library, or modules, a bit easier: an index file per directory, whether the project structure is feature-based or kind-based.

At this point, we may ask ourselves: can the techniques explained above indeed scale?

The large aspect of a large-scale application combines lines of code (20k+ LoC), number of features, third-party integrations, and the number of people contributing to the project. Since these parameters are not mutually exclusive, a one-person project can also be large scale, provided it has a fairly large line count or a sizable amount of third-party integrations.

nodejs applications, as is the case for any application stack, tend to be big and hard to maintain past a threshold. There is no better strategy to manage complexity than breaking down big components into small manageable chunks.

Large codebases tend to be hard to test, therefore hard to maintain, compared to their smaller counterparts. Obviously, nodejs applications are no exception to this.

Leveraging the index

Using an index file at every directory level makes it possible to load modules from a single instruction. Modules, at this point, are expected to be equivalent or hosted in the same directory. Directories can mirror categories (kind) or features, or a mixture of both. Adding the index file at every level makes sure we establish control over the divided entities, aka divide and conquer.
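A minimal sketch, assuming a models directory hosting user and order modules:

//models/index.js - exposes the directory content to the outer world
module.exports = {
  User: require('./user'),
  Order: require('./order')
};

//anywhere else: a single instruction loads the whole layer
var models = require('./models');
var User = models.User;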

Divide and conquer is an old technique, famously attributed to the Roman Empire, for managing complexity. Dividing a big problem into smaller manageable ones allowed the Roman Army to conquer, maintain, and administer a large chunk of the known world.

Scalability

How the above techniques can be applied at scale

The last question in this series is whether the above-described approach can scale. The key to scalability is to first build things that do not scale; then, when scalability becomes a concern, figure out how to address it. So the first iteration is not expected to be scalable.

Since the index is available in every directory, and the index role is to expose directory content to the outer world, it does not matter whether the directory count yields 1, 100, or 1000+. A simple call to the parent directory makes it possible to access 1, 100, or 1000+ libraries.

From this vantage point, the introduction of the index at every level of the directory tree comes with scalability as a “cherry on top of the cake”.

Where to go from here

This post focused on the theoretical side of the modularization business. The next step is to put the techniques described herein to the test, in the next blog post.

Conclusion

Modularization is a key strategy to crafting reusable, composable software components. It brings elegance to the codebase, reduces copy/paste occurrences (DRY), improves performance, and makes the codebase testable. Modularization reduces the complexity associated with large-scale nodejs applications.

In this article, we revisited how to increase key layers' testability by leveraging basic modularization techniques. Techniques discussed in this article are applicable to other aspects of nodejs applications. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

Is it possible to use one instance of nginx to serve as a reverse proxy for multiple application servers running on different dedicated IP addresses, under the same domain umbrella?

This article points in the direction of how to achieve that.

Spoiler: it is possible to run an nginx server as both a reverse proxy and a load balancer.

In this article we will talk about:

  • Configure nginx as a nodejs reverse-proxy server
  • Proxy multiple IP addresses under the same banner: load balancer

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Installation

The magic happens in the upstream webupstreams section. This configuration plays the gateway role and makes public a server that was otherwise private.


upstream webupstreams{
  # Directs to the process with the least number of connections.
  least_conn;
  # NOTE: a single upstream round-robins across every server listed;
  # in practice each application would typically get its own upstream block.
  server 127.0.0.1:8080 max_fails=0 fail_timeout=10s;
  server localhost:8080 max_fails=0 fail_timeout=10s;

  server 127.0.0.1:2368 max_fails=0 fail_timeout=10s;
  server localhost:2368 max_fails=0 fail_timeout=10s;

  keepalive 512;
}

server {
  listen 80;
  server_name app.website.tld;
  client_max_body_size 16M;
  keepalive_timeout 10;

  # Make site accessible from http://localhost/
  root /var/www/[app-name]/app;
  location / {
    proxy_pass http://webupstreams;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
server {
    listen 80;
    server_name blog.website.tld;
    access_log /var/log/blog.website.tld/logs.log;
root /var/www/[cms-root-folder|ghost|etc.];

    location / {
        proxy_pass http://webupstreams;
        #proxy_http_version 1.1;
        #proxy_pass http://127.0.0.1:2368;
        #proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
    }
}

Example: Typical nginx configuration at /etc/nginx/sites-available/app-name

This article is an excerpt from “How to configure nginx as a nodejs application proxy server” article.

Conclusion

In this article, we revisited how to proxy multiple servers via one nginx instance, an nginx load balancer for short. There are additional complementary materials in the “Testing nodejs applications” book.


#snippets #code #annotations #question #discuss

This article is going to explore how to deploy a nodejs application on a traditional linux server, in a non-cloud environment. Even though the use case is Ubuntu, any Linux distro or macOS would work perfectly fine.

For information on deploying on non-traditional servers, read “Deploying nodejs applications”. For zero-downtime knowledge, read “How to achieve zero downtime deployment with nodejs”.

In this article we will talk about:

  • Preparing nodejs deployable releases
  • Configuring nodejs deployment environment
  • Deploying a nodejs application on a bare metal Ubuntu server
  • Switching on the nodejs application ~ adding nginx as the reverse proxy that makes the application available to the world
  • Post-deployment support ~ production support

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to level up their knowledge. You may use this link to buy the book.

Preparing a deployable release

There are several angles to look at release and deployment from. There are also several ways to release nodejs code, npm and tar for instance, depending on the environment in which the code is designed to run. Amongst environments, server-side, universal, or command-line are classic examples.

In addition, we have to take a look from a dependency management perspective. Managing dependencies at deployment time has two vectors to take into account: whether the deployment happens online or offline.

For the code to be prepared for release, it has to be packaged. Two methods of packaging nodejs software, amongst other things, are managed packaging and bundling. More on this is discussed here.

As a plus, versioning should be taken into consideration when preparing a deployable release. The versioning scheme in common circulation is SemVer.
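For instance, npm ships with built-in SemVer helpers that make cutting a release mechanical (the appname tarball name below is hypothetical):

# bumping versions the SemVer way
$ npm version patch   # 1.2.3 -> 1.2.4 ~ bug fixes
$ npm version minor   # 1.2.4 -> 1.3.0 ~ backward-compatible features
$ npm version major   # 1.3.0 -> 2.0.0 ~ breaking changes
$ npm pack            # produces a deployable appname-2.0.0.tgz tarball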

Configuring deployment environment

Before we dive into deployment challenges, let's look at key software and configuration requirements.

As usual, first-time work can be hard to do. But the yield should then be predictable, flexible to improvement, and capable of being built upon for future deployments. For context, the deployment environment we are talking about in this section is the production environment.

Two key configurations are an nginx reverse proxy server and nodejs. But “why couple a nodejs server to an nginx reverse proxy server”? The answer to this question is twofold. First, both nodejs and nginx are single-threaded, non-blocking, event-driven systems. Second, the wide adoption of these two tools by the developer community makes them an easy choice, from both influence and the availability of collective knowledge the community shares via forums/blogs and popular Q&A sites.

How to install an nginx server ~ there is an article dedicated to this. How to configure nginx as a nodejs application proxy server ~ there is an article dedicated to this.

Additional tools to install and configure may include: the mongod database server, a redis server, monit for monitoring, and upstart for enhancing the init system.

There is a need to better understand the tools required to run a nodejs application. It is also our responsibility as developers to have a basic understanding of each tool and the role it plays in our project, in order to figure out how to configure it.

Download source code

Starting from the utility perspective, there is quite a collection of tools that are required to run on the server, alongside our nodejs application. Such software needs to be installed, updated ~ for patch releases, and upgraded ~ to new major versions, to keep the system secure and capable (bug-free/enhanced with new features).

From the packaging perspective, both supporting tools and nodejs applications adhere to a packaging strategy that makes them easy to deploy. When the package is a bundle, wget/curl can be used to download binaries. When dealing with discoverable packages, npm/yarn/brew can be used to download our application and its dependencies. Both operations yield the same outcome: un-packaging, configuration, and installation.

To deploy a versioned nodejs application on a bare metal Ubuntu server, understanding file system tweaks such as symlink-ing can save time on future deployments.

#first time on server side  
$ apt-get update
$ apt-get install git

#updating|upgrading server side code
$ apt-get update
$ apt-get upgrade
$ brew upgrade 
$ npm upgrade 

# Package download and installs 
$ /bin/bash -c "$(curl -fsSL https://url.tld/version/install.sh)"
$ wget -O - https://url.tld/version/install.sh | bash

# Discoverable packages 
$ npm install application@next 
$ yarn add application@next 
$ brew install application

Example: installing, upgrading, and downloading packages on the server

The commands above can be automated via a scheduled task. Both npm and yarn support the installation of applications bundled in a .tar file. See an example of a simple script source. We have to be mindful to clean up download directories, to save disk space.

Switching on the application

It sounds repetitive, but running npm start does not guarantee the application will be visible outside the metal server box it is hosted on. This magic belongs to the nginx reverse proxy we referred to in earlier paragraphs.

A typical nodejs application needs to start one or more of the following services each time the application reboots.

# symlinking new version to default application path
$ ln -sfn /var/www/new/version/appname /var/www/appname 

$ service nginx restart #nginx|apache server
$ service redis restart #redis server
$ service mongod restart #database server in some cases
$ service appname restart #application itself

Example: typical services to restart after a deployment

PS: The above services are managed with upstart

Adding the nginx reverse proxy makes the application available to the outside world. Switching off the application can be summarized in one command: service nginx stop. Likewise, switching off and back on can be issued in one command: service nginx restart.

Post-deployment support

Scheduled asynchronous tasks can be used to resolve a wide range of issues: background tasks such as fetching updates from third-party data providers, system health checks and notifications, automated software updates, database cleaning, cache busting, and scheduled expensive/CPU-intensive batch processing jobs, just to name a few.

It is possible to leverage the existing OS-provided task-scheduling infrastructure to achieve any of the named use cases; third-party tools can do exactly the same job.
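For instance, cron is the classic OS-provided scheduler for such tasks; the entry below, with hypothetical paths, runs a nightly cleanup of the download directories:

# hypothetical crontab entry: clean download directories nightly at 02:00
0 2 * * * /bin/bash /var/www/appname/scripts/cleanup-downloads.sh >> /var/log/appname/cron.log 2>&1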

Rule of thumb

The following is a mental model that can be applied to common release use cases. It may be basic for DevOps professionals, but useful enough for developers doing some operations work as well.

  • Prepare deployable releases
  • Update and install binaries ~ using apt, brew etc.
  • Download binaries ~ using git, wget, curl or brew
  • symlink directories(/log, /config, /app)
  • Restart servers and services ~ redis, nginx, mongodb and app
  • When something goes bad ~ walk two steps back. That is our rollback strategy.

This model can be refined to make most of these tasks, deployments included, repeatable and automated.

Conclusion

In this article, we revisited quick, easy, and most basic nodejs deployment strategies. We also revisited how to expose the deployed applications to the world using nginx as a reverse proxy server. There are additional complementary materials in the “Testing nodejs applications” book.


#snippets #code #annotations #question #discuss

Building, testing, deploying, and maintaining large-scale applications is challenging in many ways. It takes discipline, structure, and rock-solid processes to succeed with production-ready nodejs applications. This document puts together a collection of ideas and tribulations from a personal perspective, so that you can avoid some mistakes and succeed with your project.

In this article we will talk about:

  • Avoiding the integration test trap
  • Mocking strategically, or applying code re-usability to mocks
  • Achieving healthy test coverage

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Why

There is a lot of discussion around unit testing. Not every developer likes testing; the testing approach differs from one person to another, and from one system to the next. It is also worth highlighting that some, if not the majority, skip TDD's way of doing business for alternatives. One thing is clear: you cannot guarantee the sanity of a piece of code unless it is tested. The “HOW” may be the problem we have to figure out and fix. In other words, should a test be written before or after writing the code?

Pros: Tests (unit tests)

  • Increases confidence when releasing new versions.
  • Increases confidence when changing the code, such as during refactoring exercises.
  • Increases overall code health and reduces bug count
  • Helps developers new to the project better understand the code

Cons:

  • Takes time to write, refactor and maintain
  • Increases codebase learning curve

What

There is a consensus that every feature should be tested before landing in a production environment. Since tests tend to be repetitive and time-consuming, it makes sense to automate the majority of them, if not all. Automation makes it feasible to run regression tests on quite a large codebase, and tends to be more accurate and effective than manual testing alone.

Layers that require particular attention while testing are:

  • Unit test controllers
  • Unit test business logic(services) and domain (models)
  • Utility library
  • Server start/stop/restart and anything in between those states
  • Testing routes(integration testing)
  • Testing secured routes

Questions we should keep in mind while testing are: how to create good test cases (Case > Feature > Expectations), and how to unit test controllers without letting unit tests become integration tests; a sketch of the former follows.
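The Case > Feature > Expectations hierarchy maps naturally onto most test runners; a minimal mocha sketch, with hypothetical names:

describe('UserController', function(){          //Case: the unit under test
  describe('#getUser()', function(){            //Feature: one behavior
    it('returns a user by id', function(){});   //Expectation
    it('forwards lookup errors', function(){}); //Expectation
  });
});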

The beauty of having nodejs, or JavaScript in general, is that to some extent some test cases can be reused for back-end and front-end code alike. Like any other component/module, unit test code should be refactored for structure and readability, just as the rest of the codebase is.

Choosing Testing frameworks

For those who bought into the idea of having a TDD way of doing business, here are a couple of things to consider when choosing a testing framework:

  • Learning curve
  • How easy to integrate into project/existing testing frameworks
  • How long does it take to debug testing code
  • How good is the documentation
  • How good is the community backing the testing framework, and how well does the library happen to be maintained
  • How test doubles (Spies, Mocking, Coverage reports, etc) work within the framework. Third-party test doubles tend to beat framework native test doubles.

Conclusion

In this article, we revisited high-level objectives when testing a nodejs application deployable at scale. There are additional complementary materials in the “Testing nodejs applications” book that dive deeper into the integration testing trap and how to avoid it, how to achieve healthy code coverage without breaking the piggy bank, as well as some thoughts on mocking strategically.


#snippets #code #annotations #question #discuss

In the real world, jargon is sometimes confusing, to the point that some words may lose their intended meaning to some audiences. This blog post re-injects clarity into some misconceptions around testing jargon.

In this article we will talk about:

  • Confusing technical terms around testing tools
  • Confusing technical terms around testing strategies
  • How to easily remember “which is what” around testing tools
  • How to easily remember “which is what” around test strategies

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//order/model.js
Order.find()
	.populate()
	.sort()
	.exec(function(err, order){ 
        /** ... */
    });

//order/middleware.js - next is the middleware's callback
new Order(params).save((error, order) => {
    if(error) return next(error, null);
    new OrderService(order).scheduleShipping();
    return next(null, order);
});

Example: Used to showcase test double usage

Asking the right questions

When trying to figure out “what does what” within testing stacks, the following questions give a clue on how to differentiate one concept from the next, and one tool from the next:

This article is unfinished business; more content will be added as I gather more confusing terms or find more interesting use cases.

  • What role does the test runner play when testing the code sample above
  • What role does the test doubles play when testing the code sample above
  • How do a test runner and test doubles stack up in this particular use case.
  • How does stubbing differ from mocking
  • How does stubbing differ from spying and from fakes ~ (functions with pre-programmed behavior that replace real behaviors)
  • How can we tell the function next(error, obj) has been called, with order or error arguments in our case?
  • What are other common technical terms that cause confusion around test strategies ~ things such as unit tests, integration tests, and end-to-end tests. Other less common, but confusing anyway, terms like smoke testing, exploratory testing, regression testing, performance testing, benchmarking, and the list goes on. We have to make an inventory of these terms (alone or in a team) and formulate a definition for each one before we adopt them in vernacular day-to-day communication.
  • What are other common technical terms that cause confusion around testing tools ~ which tools are more likely to cause confusion, and how do they stack up in our test cases (test double tools: mocking tools, spying tools, stubbing tools versus frameworks such as mocha, chai, sinon, jest, jasmine, etc.)
  • How to easily remember “which is what” around test strategies such as unit/regression/integration/smoke/etc. tests
  • How to easily remember “which is what” around testing tools such as mocks/stubs/fakes/spies/etc.

In the test double universe, there is a clear distinction between a stub, a fake, a mock, and a spy. The stub is a drop-in replacement of a function or method we are not willing to run when testing a code block. A Mock, on the other hand, is a drop-in replacement of an object, most of the time used when data encapsulated in an object is expensive to get. The object can either be just data, or instance of a class, or both. A spy tells us when a function has been called, without necessarily replacing a function. A fake and a stub are pretty close relatives, and both replace function implementations.
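Mapped onto the Order example above, the four doubles may look like the following sinon sketch:

var sinon = require('sinon');

//stub: drop-in replacement of a function we are not willing to run
var save = sinon.stub(Order.prototype, 'save').yields(null, { id: 1234 });

//mock: drop-in replacement of an object that is expensive to get
var order = { id: 1234, items: [], total: 0 };

//spy: tells us a function has been called, without replacing it
var next = sinon.spy();

//fake: a pre-programmed implementation standing in for the real one
var scheduleShipping = sinon.fake.returns(true);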

Conclusion

In this blog, we explored differentiations around test strategies and testing tools. Without prescribing a solution, we let our thoughts play with possibilities around solutions and strategies that we can adapt for various use cases and projects.


#snippets #code #annotations #question #discuss

Testing authenticated routes sounds intimidating, but the trick to getting it right is simple: the right combination of mocking a session object and stubbing the authentication middleware. This article revisits these two key ingredients to make tests work.

In this article we will talk about:

  • Avoiding the integration test trap on authenticated routes
  • Stubbing authentication middleware for faster tests
  • Mocking session data of authentication-protected routes

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


// Authentication middleware in middlewares/authenticated.js 
module.exports = function(req, res, next){
    let token = req.headers.authorization;
    let payload = jwt.decode(token, config.secret);
    if(!validate(payload)) return next(new Error('session expired'));
    req.user = payload.sub;//adding the authenticated user to the request
    return next();
};

//Session Object in settings/controller/get-profile  
module.exports = function(req, res, next){
    let user = req.session.user;
    UserModel.findById(user._id, (error, user) => {
        if(error) return next(error, null);
        return res.status(200).json(user); 
    });     
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getProfile = require('./settings/get-profile');
router.get('/profile/:id', authenticated, getProfile);
module.exports = router;

What can possibly go wrong?

There is a clear need to mimic real authentication when testing expressjs authenticated routes, and sometimes this need leads to an integration testing trap.

Following are other challenges we may expect along the way:

  • Avoid testing underlying libraries that provide authentication features
  • Simulate authenticated session data
  • Mock requests behind protected third-party routes, such as Payment Gateways, etc.

Choosing tools

If you haven't already, reading the “How to choose the right tools” blog post gives insights into the framework we used to choose the tools we suggest in this blog post.

Following our own “Choosing the right tools” framework, the tools below are not a prescription; rather, they are the ones that made sense to complete this article:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.
  • supertest framework for mocking RESTful APIs and nock for intercepting and mocking third-party HTTP requests. supertest is written on top of superagent, so we get both testing toolkits.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting, we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

The key to mocking a session object lies in this line, found in the example above: let user = req.session.user;. With that knowledge, we can hand the controller a request object that carries a fake session:


//assuming the controller from settings/controller/get-profile above
var getProfile = require('./settings/controller/get-profile');

describe('getProfile', () => {
  let req, res, next, jsonSpy;
  beforeEach(() => {
    next = sinon.spy();
    jsonSpy = sinon.spy();
    sessionObject = { user: { /*...*/ } };//mocking session object
    req = { params: {id: 1234}, session: sessionObject };
    //status() returns an object exposing a json spy we can assert on
    res = { status: (code) => ({ json: jsonSpy }) };
  });

  it('returns a profile', () => {
    getProfile(req, res, next);
    expect(jsonSpy).toHaveBeenCalled();
  });

});

On the other hand, since authenticated() resides in a library, it can simply be stubbed like any other function when the time comes to test the whole route: let authenticated = sinon.spy();.
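When the router wires the middleware at require time, a module-loading interceptor helps swap it out; a minimal sketch, assuming proxyquire is installed:

var proxyquire = require('proxyquire');
var sinon = require('sinon');

//stub lets every request through, skipping real token validation
var authenticated = sinon.stub().callsFake(function(req, res, next){
    req.session = { user: { _id: 1234 } };//mocked session data
    return next();
});
var router = proxyquire('./router', {
    './middleware/authenticated': authenticated
});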

Conclusion

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect.

One use case of tapping into middleware re-usability/composability and testability is the authentication middleware presented herein. Writing a good, meaningful test message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.


#snippets #code #annotations #question #discuss

The middleware is one of the components that improve the composability of the expressjs router. This blog post approaches middleware testing from a real-world perspective. The use case is CORS, since it is found in almost all expressjs-enabled applications.

In this article we will talk about:

  • How to mock Request/Response Objects
  • Spying on whether certain calls have been made
  • Making sure the requests don't leave the local machine

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

The CORS middleware is one of the most used middlewares in the nodejs community.

module.exports = function cors(req, res, next) {
    res.set('Access-Control-Allow-Credentials', true);
    res.set('Access-Control-Allow-Origin', '*');
    res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
    res.set('Access-Control-Allow-Headers', 'X-CSRF-Token, X-CSRF-Strategy, X-Requested-With, Accept, Authorization, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version');
    res.set('Content-Type', 'application/json');
    res.set('Access-Control-Max-Age', '3600');

    return req && req.method === 'OPTIONS' ? res.send(200) : next();
};

Example: CORS middleware in lib/middleware/cors.js

Code sample is modeled from: Unit Testing Controllers the Easy Way in Express 4

What can possibly go wrong?

As is the case for routers, the following points may be challenges when unit testing expressjs middleware:

  • Mock database read/write operations for a middleware that reads/writes from/to a database
  • Mocking read/write from/to third-party services to avoid integration testing trap

Choosing tools

If you haven't already, reading the “How to choose the right tools” blog post gives insights into the framework we used to choose the tools we suggest in this blog post.

Following our tiny “Choosing the right tools” framework, the following tools make sense in the context of this blog post, when testing expressjs middleware:

  • There exist well-respected test runners such as jasmine(jasmine-node), ava, and jest in the wild. mocha will do just fine for example's sake.
  • There are also code instrumentation tools in the wild. mocha integrates well with the istanbul test coverage and reporting library.

The testing stack of mocha, chai, and sinon is worth a shot for most use cases.

Workflow

If you haven't already, read the “How to write test cases developers will love” article.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR $ nyc test mocha -- --color --reporter mocha-lcov-reporter specs

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

Have you ever wondered where to start when refactoring a code block? That is a common source of frustration, and of the bad decision-making that generally follows. When paying off technical debt, small bad moves can build up into a catastrophe, such as unexpected downtime with little to no failure traceability.

This blog post approaches testing of a fairly large nodejs application from a real-world perspective and with refactoring in mind.

The mainstream philosophy about automated testing is to write failing tests, followed by code that resolves the failing use cases. In the real world, writing tests may follow writing code as much as it precedes it; a particular case is when dealing with untested, existing code.

var sinon = require('sinon'), 
    chai = require('chai'), 
    expect = chai.expect, 
    cors = require('./middleware').cors, 
    req, 
    res, 
    next;
   
describe("cors()", function() {
    before(function(){
        req = {}, 
        res = { send: sinon.spy()}, 
        next = sinon.spy();
    });

    it("should skip preflight requests", function() {
        req = {method: 'OPTIONS'};//preflight requests have method === options
        cors(req, res, next);
        expect(res.send.calledOnce).to.equal(true); 
        res.send.restore();
    });     

    it('should decorate requests with CORS permissions', function() => {
        cors(req, res, next);
        expect(next.calledOnce).to.equal(true); 
        next.restore();
    });
});

Example: unit tests for the CORS middleware

Special use case: how to mock a response that will be used with a streaming source.

It is worth mentioning that mocking a request object is not rocket science. An empty object, with the right methods used in a given test, is sufficient to assert whether the areas of our interest are covered.
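For the streaming special case, one approach is to let a PassThrough stream play the response role, decorated with the few express-ish methods under test; a sketch under that assumption:

var PassThrough = require('stream').PassThrough;
var sinon = require('sinon');

var res = new PassThrough();          //writable: a stream source can pipe into it
res.set = sinon.spy();                //decorate with the methods the code uses
res.status = sinon.stub().returns(res);

//once the code under test finishes piping, assert on what was streamed
res.on('finish', function(){ /* assertions here */ });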

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect. One way this idea is reflected in real life is by testing the middleware as the isolated, reusable, and composable component it constitutes. Writing a good, meaningful test message is pure art.

There are additional complementary materials in the “Testing nodejs applications” book.


#snippets #code #annotations #question #discuss