Simple Engineering

snippets

Modularization of redis for testability

To take advantage of multicore systems, nodejs — being a single-threaded JavaScript runtime — spins up multiple processes to guarantee parallel processing capabilities. That works well until inter-process communication becomes an issue.

That is where key/value stores such as redis come into the picture: they solve the inter-process communication problem while enhancing the real-time experience.

This article showcases how to leverage modular design to provide testable and scalable code.

In this article we will talk about:

  • How to modularize redis clients for reusability
  • How to modularize redis clients for testability
  • How to modularize redis clients for composability
  • The need to have a redis powered pub/sub
  • Techniques to modularize redis powered pub/sub
  • The need for loose coupling between WebSocket and the redis pub/sub system
  • How to modularize WebSocket redis communications
  • How to modularize redis configuration

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

Introducing extra components makes it hard to test a system in isolation. This example highlights some of the moving parts we will be discussing in this article:

//creating the Server -- alternative #1 
var app = express();
var server = Server(app);

//creating the Server -- alternative #2
var express = require('express'),
    app = express(),
    server = require('http').createServer(app);

//Initialization of WebSocket Server + Redis Pub/Sub    
var wss = require("socket.io")(server),
    redis = require('redis'), 
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost), 
    sub = redis.createClient(rport, rhost);
  
//HTTP session middleware thing
function middleware(req, res, next){
 //...
 next();
}

//exchanging session values 
wss.use(function(socket, next){
 	middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);
    
//somewhere
wss.sockets.on("connection", function(socket) {
 
 //socket.request.session 
 //Now it's available from Socket.IO sockets too! Win!
 socket.on('message', (event) => {
	 var payload = JSON.parse(event.payload || event),
	 	user = socket.handshake.user || false;
	 
	 //except when coming from pub
	 pub.publish(payload.conversation, JSON.stringify(payload)); 
 });

 //redis listener
 sub.on('message', function(channel, event) {
	var payload = JSON.parse(event.payload || event),
		user = socket.handshake.user || false;
    wss.
      sockets.
      in(payload.conversation).
      emit('message', payload);
 });
});

Example: a WebSocket server wired to a redis pub/sub pair

What can possibly go wrong?

  • Having redis.createClient() scattered everywhere makes it hard to mock
  • Creation/deletion of redis client instances (pub/sub) is out of control

One way is to create one instance (preferably while loading the top-level module) and inject that instance into dependent modules ~ Managing modularity and redis connections in nodejs. The other way relies on the node module loader: loaded modules are cached, which provides a singleton by default, as sketched below.
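A minimal sketch of the second approach, assuming a /lib/redis.js module; the file path and names are illustrative:

//lib/redis.js ~ created once, then cached by the module loader
var redis = require('redis');
module.exports = redis.createClient(process.env.REDIS_PORT, process.env.REDIS_HOST);

//anywhere else in the application ~ require() returns the same cached instance
var client = require('./lib/redis');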

The need to have a redis powered pub/sub

JavaScript, and nodejs in particular, is single-threaded, but the platform has other ways to provide parallel computing.

It is possible to spin up any number of processes depending on application needs. Process-to-process communication then becomes an issue: when one process mutates the state of a shared object, for instance, any other process on the same server has to be informed about the update.

Unfortunately, plain processes cannot do that on their own. The pub/sub mechanism that redis brings to the table makes it possible to solve this class of problems.
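For illustration, a bare-bones pub/sub exchange with the node redis client; the channel name and payload are made up for the example:

var redis = require('redis');
var pub = redis.createClient();
var sub = redis.createClient();

//every subscribed process receives updates published on the channel
sub.subscribe('cache:invalidate');
sub.on('message', function(channel, message){
  console.log('received on %s: %s', channel, message);
});

//any process can notify its peers
pub.publish('cache:invalidate', JSON.stringify({key: 'user:1234'}));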

How to modularize redis clients for testability

pub/sub implementations make the code intimidating, especially when the time comes to test.

We assume that the existing code has little to no test coverage and, most importantly, is not modularized. Alternatively, the code may be well tested and well modularized, but the addition of real-time handling creates the need to leverage pub/sub to provide a near real-time experience.

The first and easy thing to do in such a scenario is to break code blocks into smaller chunks that we can test in isolation.

  • In essence, the pub and sub are both redis clients that have to be created independently, so that they run in two separate contexts and processes. We may be tempted to use the pub and sub objects as the same client; that would be detrimental and create race conditions from the get-go.
  • Delegating pub/sub creation to a utility function makes it possible to mock the clients.
  • The utility function should accept an injected redis. It is possible to go the extra mile and delegate redis instance initialization to its own factory. That way, it becomes even easier to mock the redis instance itself.

Past these steps, other refactoring techniques can take over.

// hard to mock when located in [root]/index.js  
var redis = require('redis'), 
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost), 
    sub = redis.createClient(rport, rhost);

// Easy to mock with the introduction of a createClient factory
// in /lib/util/redis.js
var rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT;
module.exports = function(redis){
    return redis.createClient(rport, rhost);
};
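With the factory in place, a test can inject a hand-rolled fake redis and assert on the factory's behavior without a running server. A minimal sketch, using mocha and the core assert module; the fake is made up for the example:

//in test/util/redis.spec.js
var assert = require('assert');
var createClient = require('../../lib/util/redis');

describe('redis helper', function(){
  it('creates a client off the injected redis', function(){
    var fakeRedis = {
      createClient: function(port, host){
        return {port: port, host: host, fake: true};
      }
    };
    var client = createClient(fakeRedis);
    assert.ok(client.fake);
  });
});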

How to modularize redis clients for reusability

The example provided in this article scratches the surface on what can be achieved when integrating redis into a project.

What would the chain of events be if, for some reason, the redis server goes down? Would that affect the overall health and usability of the whole application?

If the answer is yes, or not sure, that gives a pretty good indication of the need to isolate usage of redis and make sure its modularity is sound and failure-proof.

Modularization of redis can be seen from two angles: publishing a set of events to the shared store, and subscribing to the shared store for updates on events of interest.

By making the redis integration modular, we also have to make sure that redis server downtime or failure does not translate into a cascading effect that may bring the application down.

//in app|server|index.js   
var client = require("redis").createClient(); 
var app = require("./lib")(client); //<- Injection

//injecting redis into a route
var createClient = require('./lib/util/redis');
module.exports = function(redis){
  return function(req, res, next){
    //the client is created from the injected redis module
    var redisClient = createClient(redis);
    return res.status(200).json({message: 'About Issues'});
  };
};

//usage
var redis = require('redis');
var getMessage = require('./')(redis);

How to modularize redis clients for composability

In the previous two sections, we have seen how pub/sub enhanced by a redis server brings near real-time experience to the program.

The problem we faced in both sections is that redis is tightly coupled to all modules, even those that do not need to use it.

Composability becomes an issue when we need to avoid having a single point of failure in the program, as well as when providing test coverage deep enough to prevent common failure cases.

// in /lib/util/redis
const redis = require('redis');
// hands back an injected replacement when options provide one,
// the real redis module otherwise
module.exports = function(options){
  return options && options.mock ? options.mock : redis;
};

The above small factory may look a little weird, but it offsets initialization to the caller and makes it possible to swap a mock in when testing.

Techniques to modularize redis powered pub/sub

The need to modularize the pub/sub code has been discussed in previous segments.

The issue we still have at this time is at the pub/sub handler level. As we may have noticed already, testing pub/sub handlers is challenging, especially without an up and running redis instance.

Modularizing these two kinds of handlers provides an opportunity to test pub/sub handlers in isolation. It also makes it possible to share the handlers with other systems that need exactly the same kind of behavior.
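One way to go about it, sketched below: the handlers move to their own module and receive their dependencies as arguments, so a test can call them directly with fakes. The module path and names are illustrative:

//lib/handlers.js ~ pub/sub handlers extracted for isolated testing
module.exports.onSocketMessage = function(pub){
  return function(event){
    var payload = JSON.parse(event.payload || event);
    pub.publish(payload.conversation, JSON.stringify(payload));
  };
};

module.exports.onStoreMessage = function(wss){
  return function(channel, message){
    var payload = JSON.parse(message);
    wss.sockets.in(payload.conversation).emit('message', payload);
  };
};

//usage ~ socket.on('message', handlers.onSocketMessage(pub));
//        sub.on('message', handlers.onStoreMessage(wss));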

The need for loose coupling between WebSocket and the redis pub/sub system

One example of decoupling pub/sub from redis and making its handlers re-usable can be seen when the WebSocket server has to leverage socket server events.

For example, on a new message read on the socket, the socket server should notify other processes that there is in fact a new message on the socket.

The pub is the right place to post this kind of notification. On a new message posted in the store, the WebSocket server may need to respond to a particular user, and so forth.

How to modularize WebSocket redis communications

There is a use case where the same message can be ping-ponged between pub and sub indefinitely.

To make sure such a thing doesn't happen, a communication protocol should be established. For example, when a message is published to the store by a WebSocket server and is destined to all participating processes, a corresponding listener should read from the store and forward the message to all participating sockets. In such a way, a socket that receives a message simply publishes it, but does not answer the sender right away.

Subscribed sockets can then read from the store and forward the message to the right receiver.
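One possible convention, sketched below, tags every published message with the publishing process identifier, so a subscriber can drop messages that originated from its own process. The origin field is made up for the example:

//publishing side ~ tag the message with this process id
pub.publish(payload.conversation, JSON.stringify(
  Object.assign({}, payload, {origin: process.pid})
));

//subscribing side ~ skip our own messages to break the ping-pong loop
sub.on('message', function(channel, message){
  var payload = JSON.parse(message);
  if(payload.origin === process.pid) return;
  wss.sockets.in(payload.conversation).emit('message', payload);
});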

There is an entire blog dedicated to modularizing nodejs WebSockets here

How to modularize redis configuration

The need to configure a server is not specific to the redis server; it applies to any other server or service as well.

In this particular instance, we will see how we can include redis configuration into an independent module that can then be used with the rest of the configurations.

//from the example above 
const redis = require("redis"); 
const port = process.env.REDIS_PORT || "6379";
const host = process.env.REDIS_HOST || "127.0.0.1";
module.exports = redis.createClient(port, host);

//abstracting configurations in lib/configs
module.exports = Object.freeze({ 
  redis: {
    port: process.env.REDIS_PORT || "6379",
    host: process.env.REDIS_HOST || "127.0.0.1"
  }
});

//using the abstracted configurations
const redis = require("redis");
const configs = require('./lib/configs');
module.exports = redis.createClient(
  configs.redis.port, 
  configs.redis.host
);

This strategy to rethink application structure was found here

Conclusion

Modularization is a key strategy in crafting re-usable, composable software. Modularization brings not only elegance, but also makes copy/paste detectors happy, and at the same time improves both performance and testability.

In this article, we revisited how to aggregate WebSocket code into composable and testable modules. The need to group related tasks into modules involves the ability to add support of Pub/Sub on demand and to use various solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.

References + Reading List

tags: #snippets #redis #nodejs #modularization

Is it possible to use one instance of nginx to serve as a reverse proxy for multiple application servers running on different, dedicated IP addresses, under the same domain umbrella?

This article points in the direction of how to achieve that.

Spoiler: It is possible to run nginx server both as a reverse proxy and load balancer.

In this article we will talk about:

  • Configure nginx as a nodejs reverse-proxy server
  • Proxy multiple IP addresses under the same banner: load balancer

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Installation

The magic happens at the upstream webupstreams section. This configuration plays the gateway role and makes public a server that was otherwise private.


upstream webupstreams{
  # Directs to the process with least number of connections.
  least_conn;
  server 127.0.0.1:8080 max_fails=0 fail_timeout=10s;
  server localhost:8080 max_fails=0 fail_timeout=10s;

  server 127.0.0.1:2368 max_fails=0 fail_timeout=10s;
  server localhost:2368 max_fails=0 fail_timeout=10s;
  keepalive 512;
}

server {
  listen 80;
  server_name app.website.tld;
  client_max_body_size 16M;
  keepalive_timeout 10;

  # Make site accessible from http://localhost/
  root /var/www/[app-name]/app;
  location / {
    proxy_pass http://webupstreams;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
server {
    listen 80;
    server_name blog.website.tld;
    access_log /var/log/blog.website.tld/logs.log;
    root /var/www/[cms-root-folder|ghost|etc.];

    location / {
        proxy_pass http://webupstreams;
        #proxy_http_version 1.1;
        #proxy_pass http://127.0.0.1:2368;
        #proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
    }
}

Example: Typical nginx configuration at /etc/nginx/sites-available/app-name

This article is an excerpt from “How to configure nginx as a nodejs application proxy server” article.

Conclusion

In this article, we revisited how to proxy multiple servers via one nginx instance, or an nginx load-balancer for short. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

This article is going to explore how to deploy a nodejs application on a traditional linux server — in a non-cloud environment. Even though the use case is Ubuntu, any Linux distro or macOS would work perfectly fine.

For information on deploying on non-traditional servers, read: “Deploying nodejs applications”. For zero-downtime knowledge, read “How to achieve zero downtime deployment with nodejs”.

In this article we will talk about:

  • Preparing nodejs deployable releases
  • Configuring nodejs deployment environment
  • Deploying nodejs application on bare metal Ubuntu server
  • Switching on the nodejs application ~ adding an nginx reverse proxy to make the application available to the world
  • post-deployment support — production support

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to level up their knowledge. You may use this link to buy the book.

Preparing a deployable release

There are several angles to look at release and deployment from. There are also several ways to release nodejs code, npm and tar for instance, depending on the environment in which the code is designed to run. Amongst environments, server-side, universal, or command line are classic examples.

In addition, we have to take a look from a dependency management perspective. Managing dependencies at deployment time has two vectors to take into account: whether the deployment happens online or offline.

For the code to be prepared for release, it has to be packaged. Two methods of packaging nodejs software, amongst other things, are managed packaging and bundling. More on this is discussed here

As a plus, versioning should be taken into consideration when preparing a deployable release. The versioning scheme in common circulation is SemVer.

Configuring deployment environment

Before we dive into deployment challenges, let's look at key software and configuration requirements.

As usual, first-time work can be hard to do. But the yield should then be predictable, flexible to improvement, and capable of being built upon for future deployments. For context, the deployment environment we are talking about in this section is the production environment.

Two key configurations are an nginx reverse proxy server and nodejs. But, “Why couple a nodejs server to an nginx reverse proxy server”? The answer to this question is twofold. First, both nodejs and nginx are single-threaded non-blocking reactive systems. Second, the wide adoption of these two tools by the developer community makes it an easy choice, from both influence and availability of the collective knowledge the developer community shares via forums/blogs and popular QA sites.

How to install nginx server ~ [there is an article dedicated to this](). How to configure nginx as a nodejs application proxy server ~ there is an article dedicated to this.

Additional tools to install and configure may include: mongod database server, redis server, monit for monitoring, upstart for enhancing the init system.

There is a need to better understand the tools required to run a nodejs application. It is also our responsibility as developers to have a basic understanding of each tool and the role it plays in our project, in order to figure out how to configure each tool.

Download source code

Starting from the utility perspective, there is quite a collection of tools that are required to run on the server, alongside our nodejs application. Such software needs to be installed, updated ~ for patch releases, and upgraded ~ to new major versions, to keep the system secure and capable (bug-free, enhanced with new features).

From the packaging perspective, both supporting tools and nodejs applications adhere to a packaging strategy that makes them easy to deploy. When the package is indeed a bundle, wget/curl can be used to download binaries. When dealing with discoverable packages, npm/yarn/brew can also be used to download our application and its dependencies. Both operations yield the same outcome: un-packaging, configuration, and installation.

When deploying a versioned nodejs application on a bare metal Ubuntu server, understanding file system tweaks such as symlinking for faster deployments can save time on future deployments.

#first time on server side  
$ apt-get update
$ apt-get install git

#updating|upgrading server side code
$ apt-get update
$ apt-get upgrade
$ brew upgrade 
$ npm upgrade 

# Package download and installs 
$ /bin/bash -c "$(curl -fsSL https://url.tld/version/install.sh)"
$ wget -O - https://url.tld/version/install.sh | bash

# Discoverable packages 
$ npm install application@next 
$ yarn add application@next 
$ brew install application

Example:

The commands above can be automated via a scheduled task. Both npm and yarn support the installation of applications bundled in a .tar file. See an example of a simple script source. We have to be mindful to clean up download directories, to save disk space.

Switching on the application

It sounds repetitive, but running npm start does not guarantee that the application is visible outside the metal server box it is hosted on. This magic belongs to the nginx reverse proxy we were referring to in earlier paragraphs.

A typical nodejs application needs to start one or more of the following services each time the application reboots.

# symlinking new version to default application path
$ ln -sfn /var/www/new/version/appname /var/www/appname 

$ service nginx restart #nginx|apache server
$ service redis restart #redis server
$ service mongod restart #database server in some cases
$ service appname restart #application itself

Example:

PS: The above services are managed with upstart.

Adding an nginx reverse proxy makes the application available to the outside world. Switching off the application can be summarized in one command: service nginx stop. Likewise, switching off and back on can be issued in one command: service nginx restart.

Post-deployment support

Scheduled asynchronous tasks can be used to resolve a wide range of issues. Background tasks such as fetching updates from third-party data providers, system health checks and notifications, automated software updates, database cleaning, cache busting, and scheduled expensive/CPU-intensive batch processing jobs, just to name a few.

It is possible to leverage the existing OS-provided scheduled task infrastructure to achieve any of the named use cases, and third-party tools can do exactly the same job.
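For instance, cron, available on most Linux distributions, can drive such background work; the script and log paths below are hypothetical:

# crontab -e ~ run a cleanup script every day at 2am
0 2 * * * /var/www/appname/bin/cleanup.sh >> /var/log/appname/cron.log 2>&1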

Rule of thumb

The following is a mental model that can be applied to the common use cases of releases. It may be basic for DevOps professionals, but useful enough for developers doing some operations work as well.

  • Prepare deployable releases
  • Update and install binaries ~ using apt, brew etc.
  • Download binaries ~ using git, wget, curl or brew
  • Symlink directories (/log, /config, /app)
  • Restart servers and services ~ redis, nginx, mongodb and app
  • When something goes bad ~ walk two steps back. That is our rollback strategy.

This model can be refined, to make most of these tasks repeatable and automated, deployments included.

Conclusion

In this article, we revisited quick, easy, and most basic nodejs deployment strategies. We also revisited how to expose the deployed applications to the world using nginx as a reverse proxy server. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

This blog post highlights key points to consider when setting up a nodejs application workflow.

In this article we will talk about:

  • Key workflow that requires automation
  • Automating workflow using npm
  • Automating workflow using gulp
  • Inter-operable workflow using npm and gulp
  • Other tools: nx
  • Auto reload (hot reload) using: nodemon, supervisor or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Key automation opportunities

Automation opportunities are grouped around tasks that tend to be manually repeated over the course of the development lifecycle. Some of those opportunities are, but are not limited to:

  • hot reloading the server after updating some source files
  • automatically executing tests after source/test code change
  • pre-commit lint/test/cleaning hooks

To name a few. There are a few major workflow automation tools discussed in this article, but the process is applicable to any tool the reader wishes to pick. Those tools are, but are not limited to, npm, yarn, gulp — and husky for git hooks.

Hot reload can be achieved using one of the following tools: nodemon, supervisor or forever. The choice of tools does not end here, as there is always something cooking in the community. To start a server in watch mode, instead of starting the server as node server.js, we can use supervisor server.js. Later in the following sections, we will see how we can move this feature from the command line to npm scripts, or even to the gulp task runner.

Workflow with npm

There are various issues related to relying on an npm package globally installed on one system. Some of those issues are exposed when code changes hands and runs on another platform: a deployment server, a CI server, or even a developer computer other than ours. The npm package version provided globally may not be the npm package version required by the project at hand. There is no indication to tell npm to use local package A instead of globally available package B. To eliminate that ambiguity, preferring modules local to the project makes sense.

How to manage globally installed devDependencies ~ StackOverflow Question. How to Solve the Global npm Module Dependency Problem
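npm runs scripts with ./node_modules/.bin prepended to the PATH, so locally installed binaries win over global ones. A sketch of a package.json scripts section leveraging that, with nodemon and mocha as illustrative choices:

"scripts": {
  "start": "node server.js",
  "dev": "nodemon server.js",
  "test": "mocha test/**/*spec.js"
}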

Workflow with gulp

Running gulp on a remote server requires manually installing a global version of gulp. Many applications may require a different gulp version. Typical gulp installation:

$ npm install gulp -g     #provides gulp `cli` globally
$ npm install gulp --save #provides gulp locally 

Since some applications may require a different version of gulp, adding gulp in package.json as in the following example makes sure the locally sourced gulp is run.

"scripts": {
  "gulp": "./node_modules/.bin/gulp"  
}

Example: equivalent to gulp when installed globally

Use case: running mocha test with npm

This section highlights important steps to get tests up and running. Examples provided here cover single runs, as well as watch mode.

While searching for a task runner, stability, ease of use, and reporting capabilities come first. Even though mocha is easy to get started with, other tools such as jasmine-node, ava, or jest can do a pretty good job at testing node as well. They are worth giving a try.

supertest is a testing utility wrapper of superagent. It is useful when testing endpoints of a REST API in end-to-end/contract/integration test scenarios. However, when working on unit tests, there is a need to intercept HTTP requests; mocking tools such as the nock HTTP mocking framework deserve a chance.

Starting with command line test runner instructions gives a pretty good baseline and an idea of how the npm script may end up looking. The following example showcases how to run tests in watch mode, while instrumenting a select set of source code files for reporting purposes:

$ ./node_modules/.bin/istanbul cover \
    --dir ./test/coverage -i 'lib/**' \
    ./node_modules/.bin/_mocha -- --reporter \
    spec  test/**/*spec.js

istanbul is a reporting tool and will be used to generate reports, as tests progress.

# in package.json, under "scripts" > "test", add the next line
# "test": "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# then run the tests using 
$ npm test

In case that code works just fine, we can go ahead and add it in the scripts section of package.json, and that will be enough to execute the test runner command from npm.

There are additional features that make this setup a little more hectic to work with. Even though mocha is the choice of this blog, jest is also a pretty good alternative to test node with.

{
  "test": "mocha -R spec test/**/*spec.js",
  "test:compile": "mocha -R spec --compilers js:babel/register test/**/*spec.js",
  "test:watch": "npm test -- --watch",
  //istanbul coverage + local istanbul + local mocha ~ note the _mocha binary
  "test:coverage": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/_mocha -- --reporter spec test/**/*spec.js"
}

When using istanbul cover mocha, the error “No coverage information was collected, exit without writing coverage information” may be displayed. To avoid this error, use istanbul cover _mocha, which makes reporting available at the end of the test execution.

Once the npm scripts are in place, we can leverage the command line once again, but this time using a smaller version of the command. We have to keep in mind that most environments have to have npm available globally.

$ npm run test:coverage
$ npm run test:watch
$ npm run test:compile

Use case: running mocha test with gulp

$ npm run gulp will use the scripts > gulp version. The reason for using gulp while testing is to have a smaller package.json scripts section. The tasks have to be written in ./gulpfile.js and require gulp plugins to work. gulp tasks can also take on more complex custom tasks such as deployment from the local machine, codemods, and various other tasks using projects that do not have a cli tool yet.
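A minimal gulpfile sketch, assuming gulp and the gulp-mocha plugin are installed locally; the task name and glob are illustrative:

//gulpfile.js
var gulp = require('gulp');
var mocha = require('gulp-mocha');

gulp.task('test', function(){
  return gulp.src('test/**/*spec.js', {read: false})
    .pipe(mocha({reporter: 'spec'}));
});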

Conclusion

In this article, we revisited key points to set up a nodejs workflow. The workflow explored goes from writing code to automated tasks such as linting, testing, and release. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #workflow #npm #nx

One of the reasons nodejs applications slow down is a lack of accountability when it comes to managing memory. We normally defer memory management tasks to the garbage collector. That is an answer to a couple of issues, which turned out to also be a problem of its own.

This blog takes a different approach: it only states facts about key memory hog operations and provides quick fixes whenever there is one, without going into too many details — or references.

In this article we will talk about:

  • Identifying memory leak issues.
  • Tracing nodejs application memory issues
  • Cleaning nodejs long-lasting objects
  • Production grade memory leak detection tools

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

Memory Leak

Managing memory can be a daunting task in a nodejs environment. Some strategies to detect and correct memory leaks can be found in the following articles.

This article is unfinished business; more content will be added as I experience memory leak problems, or find some interesting use cases off GitHub and StackOverflow.
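As a starting point, the core process.memoryUsage() API can expose heap growth over time without any third-party tooling; the interval below is arbitrary:

//log heap usage every 30 seconds ~ a heapUsed figure that grows
//steadily across garbage collection cycles hints at a leak
setInterval(function(){
  var mem = process.memoryUsage();
  console.log('rss: %d MB ~ heapUsed: %d MB',
    Math.round(mem.rss / 1048576),
    Math.round(mem.heapUsed / 1048576));
}, 30000);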

Conclusion

In this article, we focused on identifying potential sources of memory leaks in nodejs applications and provided early detection mechanisms so that the same problem cannot happen again. There are additional complementary materials on memory management in the “Testing nodejs Applications” book.

#snippets #performance #nodejs #memory-leak

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. Even though the real-time aspect may be attributed to the WebSocket implementation, the real-time reactive aspect of nodejs applications heavily relies on pub/sub mechanisms — most of the time backed by datastore engines like redis. This article explores how to integrate the redis datastore into a nodejs application.

In this article we will talk about:

  • redis support with and without expressjs
  • redis support with and without WebSocket push mechanism
  • Alternatives to redis in nodejs world and beyond

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

The following code sample showcases how nodejs/redis integration can be achieved. It demonstrates that it is still possible to share sessions via a middleware between socket.io and expressjs.

var express = require('express'),
	Server = require('http').Server;
var app = express();
var server = Server(app);
var sio = require("socket.io")(server),
	redis = require('redis'), 
	rhost = process.env.REDIS_HOST,
	rport = process.env.REDIS_PORT,
	pub = redis.createClient(rport, rhost), 
	sub = redis.createClient(rport, rhost);


function middleware(req, res, next){
 //session initialization thing
 next();
}

//socket.io/expressjs session sharing middleware
sio.use(function(socket, next){
 	middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);
    
//somewhere
sio.sockets.on("connection", function(socket) {
 
 //socket.request.session 
 //Now it's available from `socket.io` sockets too! Win!
 socket.on('message', (event) => {
	 var payload = JSON.parse(event.payload || event),
	 	user = socket.handshake.user || false;
	 
	 //except when coming from pub
	 pub.publish(payload.conversation, JSON.stringify(payload)); 
 });

 //redis listener
 sub.on('message', function(channel, event) {
	var payload = JSON.parse(event.payload || event),
		user = socket.handshake.user || false;
	sio.sockets.in(payload.conversation).emit('message', payload);
 });
});

Example: excerpt source: StackOverflow

What can possibly go wrong?

When trying to figure out how to approach redis datastore integration into a nodejs application for inter-process communication and real-time features, the following points may be a challenge:

  • How to decouple the WebSocket events from the redis specific (pub/sub) events. We should be able to decouple, but still provide an environment where interoperability is possible at any time.
  • How to make integration modular, testable, and overall friendly to the rest of the application ecosystem

When testing this implementation, we should expect additional challenges to emerge:

  • The redis client instances (pub/sub) are created as soon as the library loads, and a redis server should be up and running by that time. The issue is when testing the application, there should be no server or any other system dependency hindering the application from being tested.
  • getting rid of redis server with a drop-in-replacement, or stubs/mocks, is more of a dream than reality ~ hard but feasible.

There is additional information on mocking and stubbing the redis data store in the “How to Mock redis datastore” article.

Conclusion

In this article, we revisited how to enhance a nodejs application with a redis based pub/sub mechanism, critical to having a reactive real-time experience. The use of WebSocket and Pub/Sub powered by a key/value data store was especially the main focus of this article. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #integration #redis

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. However, the real-time magic is attributed to the WebSocket addition. This article introduces how to integrate WebSocket support within an existing nodejs application.

In this article we will talk about:

  • WebSocket support with or without socket.io
  • WebSocket support with or without expressjs
  • Modularizations of WebSocket

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

The following shows how express routes and a socket.io instance come together to deliver messages in a socket/socket.io enabled application:

//module/socket.js - server or express app instance 
var socket = require('socket.io');
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connect', fn); 
  io.on('disconnect', fn);
};
//OR
//module/socket.js 
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

//module/routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(app){
  route.all('*', function(req, res, next){ 
    res.send(); 
    next();
  });
  app.use(route);
};

//in server.js 
var app = require('express')(),
  server = require('http').createServer(app),
  sio = require('./module/socket.js')(server);

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between SocketIO and Express 
//sessionMiddleware is the express session middleware instance
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

//application app.js|server.js initialization, etc. 
require('./module/routes')(app);

What can possibly go wrong?

When working in this kind of environment, we will find these two points to be of interest, if not challenging:

  • Having the socket.io application use the same expressjs server instance, or sharing a route instance with the socket.io server
  • Sharing session between socket.io and expressjs application

Conclusion

In this article, we revisited how to add real-time experience to a nodejs application. The use of WebSocket and Pub/Sub powered by a key/value data store was especially the main focus of this article. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #integration #WebSocket

Building, testing, deploying, and maintaining large-scale applications is challenging in many ways. It takes discipline, structure, and rock-solid processes to succeed with production-ready nodejs applications. This document puts together a collection of ideas and tribulations from a personal perspective so that you can avoid some mistakes and succeed with your project.

In this article we will talk about:

  • Avoiding integration test trap
  • Mocking strategically or applying code re-usability to mocks
  • Achieving a healthy test coverage

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Why

There is a lot of discussion around unit testing. It is not that developers dislike testing; rather, the testing approach differs from one person to another, and from one system to the next. It is also worth highlighting that some, if not the majority, skip TDD's way of doing business for alternatives. One thing is clear: You cannot guarantee the sanity of a piece of code unless it is tested. The “HOW” may be the problem we have to figure out and fix. In other words, should a test be carried out before or after writing the code?

Pro: Tests (Unit tests)

  • Increases confidence when releasing new versions.
  • Increases confidence when changing the code, such as during refactoring exercises.
  • Increases overall code health and reduces bug count
  • Helps developers new to the project better understand the code

Cons:

  • Takes time to write, refactor and maintain
  • Increases codebase learning curve

What

There is a consensus that every feature should be tested before landing in a production environment. Since tests tend to be repetitive and time-consuming, it makes sense to automate the majority of the tests, if not all. Automation makes it feasible to run regression tests on quite a large codebase, and tends to be more accurate and effective than manual testing alone.

Layers that require particular attention while testing are:

  • Unit test controllers
  • Unit test business logic (services) and domain (models)
  • Utility library
  • Server start/stop/restart and anything in between those states
  • Testing routes(integration testing)
  • Testing secured routes

Questions we should keep in mind while testing are: How to create good test cases? (Case > Feature > Expectations) and How to unit test controllers while avoiding tests that become integration tests?

The beauty of having nodejs, or JavaScript in general, is that to some extent, some test cases can be reusable for back-end and front-end code alike. Like any other component/module, unit test code should be refactored for better structure and readability, no less than the rest of the codebase.

Choosing Testing frameworks

For those who bought into the idea of having a TDD way of doing business, here are a couple of things to consider when choosing a testing framework:

  • Learning curve
  • How easy to integrate into project/existing testing frameworks
  • How long does it take to debug testing code
  • How good is the documentation
  • How good is the community backing the testing framework, and how well the library happens to be maintained
  • How test doubles (Spies, Mocking, Coverage reports, etc) work within the framework. Third-party test doubles tend to beat framework native test doubles.

Conclusion

In this article, we revisited high-level objectives when testing a nodejs application deployable at scale. There are additional complementary materials in the “Testing nodejs applications” book that dive deeper into the integration testing trap and how to avoid it, how to achieve a healthy code coverage without breaking the piggy bank, as well as some thoughts on mocking strategically.

References

#snippets #code #annotations #question #discuss

This post highlights snapshots of best practices and hacks to code, test, deploy, and maintain large-scale nodejs apps. It provides the big lines of what became a book on testing nodejs applications.

If you haven't yet, read the How to make nodejs applications modular article. This article is an overall follow-up.

Like some of the articles that came before this one, we are going to focus on a simple question as our north star: What are the most important questions developers have when testing a nodejs application? When possible, a quick answer will be provided; otherwise we will point in the right direction where information can be found.

In this article we will talk about:

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

var express = require('express'),
  app = express(),
  server = require('http').createServer(app);
//...
require('./config');
require('./utils/mongodb');
require('./utils/middleware')(app);
require('./routes')(app);
require('./realtime')(app, server)
//...
module.exports.server = server; 

Example:

The code provided here is a recap of How to make nodejs applications modular article. You may need to give it a test drive, as this section highlights an already modularized example.

Testing

Automation is what developers do for a living. Manual testing is tedious, repetitive, and those are two key characteristics of things we love automating. Automated testing is quite intimidating for newbies and veterans alike. Testing tends to be more of an art, the more you practice, the better you hone your craft.

In the blogosphere, – My node Test Strategy ~ RSharper Blog. – nodejs testing essentials

BDD versus TDD

Why should we even test

The need for testing is unanimous within the developer community; the question is always around how to go about testing.

There is a discussion mentioned in the first chapter between @kentbeck, @martinfowler and @dhh that made the rounds on social media, blogs, and finally became a subject of reflection in the community. When dealing with legacy code, there should be a balance: adopt tdd as one tool in our toolbox, not the only one.

In the book we do the following exercise as an alternative to classic tdd: read, analyze, modify if necessary, rinse and repeat. We cut the bullshit, and get to test whatever needs to be tested, and let nature take its course.

One thing is clear: We cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is on “How” to go about testing.

There is a summary of the discussions mentioned earlier, titled Is TDD Dead?. In the blogosphere, – BDD-TDD ~ RobotLovesYou Blog. – My node Test Strategy ~ RSharper Blog – A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials

What should be tested

Before we dive into it, let's re-examine the pros and cons of automated tests — in the current case, unit tests.

Pros:

  • Steers release confidence
  • Prevents common use-case and unexpected bugs
  • Helps developers new to the project better understand the code
  • Improves confidence when refactoring code
  • A well-tested product improves the customer experience

Cons:

  • Takes time to write
  • Increases the learning curve

At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything, be it features of a product or functions of code. Re-testing large applications manually is daunting, exhausting, and sometimes simply not feasible.

The good way to think about testing is not in terms of layers (controllers, models, etc.). Layers tend to be bigger. It is better to think in terms of something much smaller, like a function (the TDD way) or a feature (the BDD way).

In brief, every controller, business logic module, utility library, nodejs server, and route ~ all features are set to be tested ahead of release.

There is an article on this blog that gives more insight on — How to create good test cases (Case > Feature > Expectations | GivenWhenThen) — titled “How to write test cases developers will love reading”. In the blogosphere, – Getting started with nodejs and mocha

Choosing the right testing tools

There is no shortage of tools in the nodejs community. The problem is analysis paralysis. Whenever the time comes to choose testing tools, there are layers that should be taken into account: test runners, test doubles, reporting, and eventually any compiler that needs to be added to the mix.

Other than that, there is a list of a few things to consider when choosing a testing framework: – Learning curve – How easy it is to integrate into the project/existing testing frameworks – How long it takes to debug testing code – How good the documentation is – How big the community is, and how well the library is maintained – What it may solve faster (spies, mocking, coverage reports, etc.) – Instrumentation and test reporting, just to name a few.

There are sections dedicated to providing hints and suggestions throughout the book. There is also this article “How to choose the right tools” on this blog that gives a baseline framework to choose, not only for testing frameworks but any tool. Finally, In the blogosphere, – jasmine vs. mocha, chai and sinon. – Evan Hahn has pretty good examples of the use of test doubles in How do I jasmine blog post. – Getting started with nodejs and jasmine – has some pretty amazing examples, and is simple to start with. – Testing expressjs REST APIs with Mocha

Testing servers

The not-so-obvious part when testing servers is how to simulate starting and stopping the server. These two operations should not bootstrap dependent servers (database, data-stores) or produce side effects (network requests, writing to files), to reduce the risk associated with running an actual server.

There is a chapter dedicated to testing servers in the book. There is also this [article on this blog that can give more insights](). In the blogosphere, – How to correctly unit test express server – There is a better code structure organization that makes it easy to test and get good test coverage in “Testing nodejs with mocha”.
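A sketch of such a test, assuming the server instance is exported as in the “Show me the code” section above, without auto-starting side effects:

//test/server.spec.js ~ mocha
var assert = require('assert');
var server = require('../server').server;

describe('server', function(){
  before(function(done){ server.listen(3000, done); });
  after(function(done){ server.close(done); });

  it('is up and listening', function(){
    //address() only returns a value once the server is bound
    assert.ok(server.address());
  });
});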

Testing modules

Testing modules is not that different from testing a function or a class. When we start looking at it from this angle, things become a little easier.

The grain of salt: a module that is not directly a core component of our application, should be left alone and mocked out entirely when possible. This way we keep things isolated.

There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries(modules) in the book. There is also an entire series of articles — a more theoretical: “How to make nodejs applications modular and a more technical: “How to modularize nodejs applications” — on this blog modularization techniques. In the blogosphere, – Export This: Interface Design Patterns for nodejs Modules Alon Salant, CEO of Good Eggs and nodejs module patterns using simple examples by Darren DeRiderHow to modularize your Chat Application

Testing routes

Challenges while testing expressjs Routes

Some of the challenges associated with testing routes are testing authenticated routes, mocking requests, mocking responses as well as testing routes in isolation without a need to spin up a server. When testing routes, it is easy to fall into integration testing trap, either for simplicity or for lack of motivation to dig deeper.

The integration testing trap is when a developer confuses an integration test (or E2E test) with a unit test, and vice versa. The success of a balanced test coverage lies in identifying sooner the kind of tests adequate for a given context, and what percentage of each kind of test to apply.

For a test to be a unit test in a route testing context, it has to – focus on testing a code block (function, class, etc.), not the output of a route – mock requests to third-party systems (payment gateway, email systems, etc.) – mock database read/write operations – test worst-case scenarios such as missing data and data-structures

There is a chapter dedicated to testing models in the book. There is also this article “Testing expressjs Routes” on this blog that gives more insight on the subject. In the blogosphere – A TDD approach to building a todo API using nodejs and mongodb – Marcus on supertest ~ Marcus Soft Blog
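As an illustration, a route handler that receives its dependencies can be exercised with hand-rolled request/response fakes; every name below is made up for the sketch:

//lib/routes/messages.js ~ handler with an injected data access layer
module.exports = function(db){
  return function(req, res){
    db.findMessages(req.params.conversation, function(err, messages){
      if(err) return res.status(500).json({error: 'lookup failed'});
      return res.status(200).json(messages);
    });
  };
};

//in the test ~ no server, no database
var assert = require('assert');
var handler = require('../lib/routes/messages');

it('returns messages', function(done){
  var fakeDb = {findMessages: function(id, cb){ cb(null, ['hello']); }};
  var req = {params: {conversation: 'abc'}};
  var res = {
    status: function(code){ this.code = code; return this; },
    json: function(body){
      assert.equal(this.code, 200);
      done();
    }
  };
  handler(fakeDb)(req, res);
});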

Testing controllers

When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or even classes. In MVC jargon, this layer is also known as the controller layer.

Challenges testing controllers are, by no surprise, the same as when testing expressjs route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations, or service layers, that are not core/critical to validating the controller's expectations is one such challenge.

So is mocking controller request/response objects and, when necessary, some middleware functions.

There is a chapter dedicated to testing controllers in the book. There is also this article Testing nodejs controllers with expressjs framework on this blog that gives more insight on the subject. In the blogosphere, – This article covers Mocking Responses, etc — How to test express controllers.

Testing services

There are some instances where adding a service layer makes sense.

One of those instances is when an application has a collection of single functions under a utility umbrella (utils). Chances are some of the functions under the utility umbrella may be related in terms of features, the functionality they offer, or both. Such functions are a good use case to be grouped under a class: a service.

Another good example is applications that heavily use the model. Chances are the same functions are re-used in multiple instances, and fixing an issue involves fixing multiple places as well. When that is the case, such functions can be grouped under one banner, in such a way that an update to one function gets reflected in every instance where the function is used.

From these two use cases, testing services has no one-size-fits-all strategy. Every service should be dealt with depending on the context it operates in.

There is a chapter dedicated to testing services in the book. In the blogosphere, – “Building Structured Backends with nodejs and HexNut” by Francis Stokes ~ aka @fstokesman on Twitter source ...

Testing middleware

Middleware, in expressjs (connectjs) jargon, are in a sense hooks that intercept a request, process it, and forward the result to the rest of the route. It is no surprise that testing middleware shares the same challenges as testing route handlers and controllers.

There is a chapter dedicated to testing middleware in the book. There is also this article “Testing expressjs Middleware” on this blog that gives more insight on the subject. In the blogosphere, – How to test expressjs controllers

Testing asynchronous code

Asynchronous code is a wide subject in nodejs community. Things ranging from regular callbacks, promises, async/await constructs, streams, and event streams(reactive) are all under an asynchronous umbrella.

Challenges associated with asynchronous testing depend on the use case and context at hand. However, there are striking similarities, say, between testing async/await and testing a promise.

When an object is available, it makes sense to get a hold of it and execute assertions once it resolves. That is feasible for promises, streams, and the async/await construct. However, when the object is some kind of event, then the hold on the object can be used to add a listener and assert once the listener resolves.
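In mocha terms, the three cases look as follows; orderService and emitter are hypothetical stand-ins for the code under test:

//promise or async/await ~ return or await the object, then assert
it('resolves the order', async function(){
  var order = await orderService.getOrder('42');
  assert.equal(order.id, '42');
});

//callback style ~ signal completion with done
it('loads the order', function(done){
  orderService.getOrder('42', function(err, order){
    assert.ifError(err);
    assert.equal(order.id, '42');
    done();
  });
});

//event emitter ~ attach a listener, assert inside it
it('emits a message', function(done){
  emitter.once('message', function(payload){
    assert.ok(payload);
    done();
  });
  emitter.emit('message', {text: 'hello'});
});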

There are multiple chapters dedicated to testing asynchronous code in the book. There are also multiple article on this blog that gives more insight on the subject such as – “How to stub a stream function”“How to Stub Promise Function and Mock Resolved Output”“Testing nodejs streams”. In the blogosphere, – []()

Testing models

testing models goes hand in hand with mocking database access functions

Functions that access or change database state can be replaced by spies or fakes: custom function replacements capable of supplying or emulating results similar to the functions they replace.

sinon may not command unanimity, but it is a feature-complete, battle-tested test double library, amongst many others to choose from.
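For a hypothetical mongoose Order model, a sinon stub can replace the database write while the test asserts on the outcome; a minimal sketch:

var sinon = require('sinon');
var assert = require('assert');
var Order = require('../models/order'); //hypothetical mongoose model

it('saves without hitting the database', function(){
  //pre-programmed behavior ~ yields (null, doc) to the save callback
  var save = sinon.stub(Order.prototype, 'save');
  save.yields(null, {id: 'order-1'});

  new Order({}).save(function(err, order){
    assert.equal(order.id, 'order-1');
  });

  assert.ok(save.calledOnce);
  save.restore();
});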

There is a chapter dedicated to testing models in the book. There is also this article []() on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose modelsstubbing mongoose model question and answers on StackOverflow – Mocking database calls by wrapping mongoose with mockgoose

Testing WebSockets

Some of the challenges testing WebSockets can be summarized as trying to simulate: – sending and receiving a message on the WebSocket endpoint.

There is a chapter dedicated to testing WebSockets in the book. There is also this article on this blog that can give more ideas on how to go about testing WebSocket endpoints — another one on how to integrate WebSockets with nodejs. Elsewhere in the blogosphere, – Testing socket.io with mocha, should.js and socket.io clientsharing session between expressjs and socket.io

Testing background jobs

The background jobs bring batch processing to the nodejs ecosystem. Background jobs constitute a special use case of asynchronous communication that spans time and processes on which the system is running on.

Testing this kind of complex construct requires distilling the fundamental work done by each function/construct, focusing on the signal without losing the big picture. It requires quite a paradigm shift (a term used with reservation).

There is a chapter dedicated to testing background jobs in the book. There is an article Testing nodejs streams on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose models ~ CodeUtopia Blog

Conclusion

Some source code samples came from QA sites such as StackOverflow, hackers gists, Github documentation, developer blogs, and from my personal projects.

There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because mentioning all of them can fit into a book.

In this article, we highlighted what it takes to test various layers, and at the same time made a difference between the BDD/TDD testing schools. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #testing #tdd #bdd

In the real world, jargon is sometimes confusing to the point some words may lose their intended meaning to some audiences. This blog re-injects clarity on some misconceptions around testing jargon.

In this article we will talk about:

  • Confusing technical terms around testing tools
  • Confusing technical terms around testing strategies
  • How to easily remember “which is what” around testing tools
  • How to easily remember “which is what” around test strategies

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

//order/model.js
Order.find()
	.populate()
	.sort()
	.exec(function(err, order){ 
        /** ... */
    });

//order/middleware.js ~ next comes from the enclosing middleware signature
new Order(params).save((error, order) => {
    if(error) return next(error, null);
    new OrderService(order).scheduleShipping();
    return next(null, order);
});

Example: Used to showcase test double usage

Asking the right questions

When trying to figure out the “what does what” of testing stacks, the following questions give a clue on how to differentiate one concept from the next, and one tool from the next:

This article is unfinished business; more content will be added as I gather more confusing terms, or find some more interesting use cases.

  • What role does the test runner play when testing the code sample above?
  • What role do the test doubles play when testing the code sample above?
  • How do a test runner and test doubles stack up in this particular use case?
  • How does stubbing differ from mocking?
  • How does stubbing differ from spy fakes ~ (functions with pre-programmed behavior that replace real behaviors)?
  • How can we tell that the function next(error, obj) has been called, and with which arguments — order or error in our case?
  • What are other common technical terms that cause confusion around test strategies — things such as unit tests, integration tests and end-to-end tests, and other less common, but confusing anyway, terms like smoke testing, exploratory testing, regression testing, performance testing, benchmarking, and the list goes on. We have to make an inventory of these terms (alone or in a team), and formulate a definition around each one before we adopt them in vernacular day-to-day communication.
  • What are other common technical terms that cause confusion around testing tools — which tools are more likely to cause confusion, and how do they stack up in our test cases (test double tools: mocking tools, spying tools, stubbing tools versus frameworks such as mocha, chai, sinon, jest, jasmine, etc.)?
  • How to easily remember “which is what” around test strategies such as unit/regression/integration/smoke/etc. tests
  • How to easily remember “which is what” around testing tools such as mocks/stubs/fakes/spies/etc.

In the test double universe, there is a clear distinction between a stub, a fake, a mock, and a spy. The stub is a drop-in replacement of a function or method we are not willing to run when testing a code block. A mock, on the other hand, is a drop-in replacement of an object, most of the time used when data encapsulated in an object is expensive to get. The object can be just data, an instance of a class, or both. A spy tells us when a function has been called, without necessarily replacing the function. A fake and a stub are pretty close relatives, and both replace function implementations.
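Mapping those definitions onto the middleware example above, with sinon as the test double library; a hedged sketch:

var sinon = require('sinon');

//spy ~ observe next() without replacing it
var next = sinon.spy();

//stub ~ replace scheduleShipping() so the real service never runs
var scheduleShipping = sinon.stub(OrderService.prototype, 'scheduleShipping');

//after exercising the middleware, sinon answers
//“has next been called, and with what?”
//next.calledWith(null, order) ~ true|false
//scheduleShipping.calledOnce  ~ true|false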

Conclusion

In this blog, we explored differentiations around test strategies and testing tools. Without prescribing a solution, we let our thoughts play with possibilities around solutions and strategies that we can adapt for various use cases and projects.

References

#snippets #code #annotations #question #discuss