Simple Engineering

This article explores how to deploy a nodejs application on a traditional Linux server, in a non-cloud environment. Even though the use case is Ubuntu, any Linux distribution or macOS would work perfectly fine.

For information on deploying on non-traditional servers, read: “Deploying nodejs applications”. For zero-downtime knowledge, read “How to achieve zero downtime deployment with nodejs”.

In this article we will talk about:

  • Preparing nodejs deployable releases
  • Configuring the nodejs deployment environment
  • Deploying a nodejs application on a bare metal Ubuntu server
  • Switching on the application ~ adding nginx as a reverse proxy to make the application available to the world
  • Post-deployment support ~ production support

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to level up their knowledge. You may use this link to buy the book.

Preparing a deployable release

There are several angles from which to look at release and deployment. There are also several ways to release nodejs code, npm and tar for instance, depending on the environment in which the code is designed to run. Amongst environments, server-side, universal, and command line are classic examples.

In addition, we have to take a look from a dependency management perspective. Managing dependencies at deployment time has two vectors to take into account: whether the deployment happens online or offline.

For the code to be prepared for release, it has to be packaged. Two methods of packaging nodejs software, amongst others, are managed packaging and bundling. More on this is discussed here.

As a plus, versioning should be taken into consideration when preparing a deployable release. The versioning scheme most commonly in circulation is SemVer.

Configuring deployment environment

Before we dive into deployment challenges, let's look at key software and configuration requirements.

As usual, first-time work can be hard to do. But the yield should then be predictable, flexible to improvement, and capable of being built upon for future deployments. For context, the deployment environment we are talking about in this section is the production environment.

Two key configurations are an nginx reverse proxy server and nodejs. But “why couple a nodejs server to an nginx reverse proxy server”? The answer to this question is twofold. First, both nodejs and nginx are single-threaded, non-blocking, reactive systems. Second, the wide adoption of these two tools by the developer community makes them an easy choice, from both the influence and the availability of the collective knowledge the community shares via forums, blogs, and popular QA sites.

How to install the nginx server ~ there is an article dedicated to this. How to configure nginx as a nodejs application proxy server ~ there is an article dedicated to this.

Additional tools to install and configure may include: the mongodb database server, the redis server, monit for monitoring, and upstart for enhancing the init system.

There is a need to better understand the tools required to run a nodejs application. It is also our responsibility as developers to have a basic understanding of each tool and the role it plays in our project, in order to figure out how to configure each tool.

Download source code

Starting from the utility perspective, quite a collection of tools is required to run on the server alongside our nodejs application. Such software needs to be installed, updated ~ for patch releases, and upgraded ~ to new major versions, to keep the system secure and capable (bug-free, enhanced with new features).

From the packaging perspective, both supporting tools and nodejs applications adhere to a packaging strategy that makes them easy to deploy. When the package is a bundle, wget/curl can be used to download binaries. When dealing with discoverable packages, npm/yarn/brew can be used to download our application and its dependencies. Both operations yield the same outcome: un-packaging, configuration, and installation.

To deploy a versioned nodejs application on a bare metal Ubuntu server, understanding file system tweaks such as symlink-ing for faster deployments can save time on future deployments.

#first time on server side  
$ apt-get update
$ apt-get install git

#updating|upgrading server side code
$ apt-get update
$ apt-get upgrade
$ brew upgrade 
$ npm upgrade 

# Package download and installs 
$ /bin/bash -c "$(curl -fsSL https://url.tld/version/install.sh)"
$ wget -O - https://url.tld/version/install.sh | bash

# Discoverable packages 
$ npm install application@next 
$ yarn add application@next 
$ brew install application

Example:

The commands above can be automated via a scheduled task. Both npm and yarn support the installation of applications bundled in a .tar file. See an example of a simple script below. We have to be mindful to clean up download directories, to save disk space.
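
As an illustration, here is a minimal sketch of such a scheduled task; the update.sh script, paths, and URL are hypothetical placeholders to adjust to the actual setup:

#!/bin/bash
# update.sh ~ hypothetical maintenance script, run as root
set -e

# refresh package metadata and apply patch releases
apt-get update -y
apt-get upgrade -y

# download the latest release bundle (placeholder URL)
mkdir -p /tmp/releases
wget -q -O /tmp/releases/app.tar.gz https://url.tld/version/app.tar.gz

# clean up downloads older than 7 days to save disk space
find /tmp/releases -type f -mtime +7 -delete

# schedule it with cron, every day at 02:00
# $ crontab -e
# 0 2 * * * /opt/scripts/update.sh >> /var/log/update.log 2>&1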

Switching on the application

It sounds repetitive, but running npm start does not guarantee that the application is visible outside the metal server box it is hosted on. That magic belongs to the nginx reverse proxy we referred to in earlier paragraphs.

A typical nodejs application needs to restart one or more of the following services each time the application reboots.

# symlinking new version to default application path
$ ln -sfn /var/www/new/version/appname /var/www/appname 

$ service nginx restart #nginx|apache server
$ service redis restart #redis server
$ service mongod restart #database server in some cases
$ service appname restart #application itself

Example:

PS: The above services can be managed with upstart.

Adding the nginx reverse proxy makes the application available to the outside world. Switching off the application can be summarized in one command: service nginx stop. Likewise, switching off and back on can be issued in one command: service nginx restart.
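
For reference, a minimal reverse proxy configuration may look like the following sketch, assuming the nodejs application listens on port 3000 and example.com is a placeholder domain:

# /etc/nginx/sites-available/appname
server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    # forward upgrade headers so WebSocket connections survive the proxy
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
  }
}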

Post-deployment support

Asynchronous scheduled tasks can be used to resolve a wide range of issues: background tasks such as fetching updates from third-party data providers, system health checks and notifications, automated software updates, database cleaning, cache busting, and scheduled expensive/CPU-intensive batch processing jobs, just to name a few.

It is possible to leverage the existing OS-provided scheduled task infrastructure (such as cron) to achieve any of the named use cases, and third-party tools can do exactly the same job.
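
As a sketch, the same scheduling can also live inside the application itself; this example assumes the third-party node-cron package and a hypothetical /health endpoint:

var cron = require('node-cron');
var http = require('http');

// every hour, ping the service and log its status code
cron.schedule('0 * * * *', function () {
  http.get('http://localhost:3000/health', function (res) {
    console.log('health check status:', res.statusCode);
  }).on('error', function (err) {
    console.error('health check failed:', err.message);
  });
});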

Rule of thumb

The following is a mental model that can be applied to common release use cases. It may be basic for DevOps professionals, but it is useful for developers doing some operations work as well.

  • Prepare deployable releases
  • Update and install binaries ~ using apt, brew etc.
  • Download binaries ~ using git, wget, curl or brew
  • Symlink directories (/log, /config, /app)
  • Restart servers and services ~ redis, nginx, mongodb and the app
  • When something goes bad ~ walk two steps back. That is our rollback strategy.

This model can be refined to make most of these tasks repeatable and automated, deployments included.

Conclusion

In this article, we revisited quick, easy, and basic nodejs deployment strategies. We also revisited how to expose the deployed applications to the world using nginx as a reverse proxy server. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

This blog post highlights key points to consider when setting up a nodejs application workflow.

In this article we will talk about:

  • Key workflow that requires automation
  • Automating workflow using npm
  • Automating workflow using gulp
  • Inter-operable workflow using npm and gulp
  • Other tools: nx
  • Auto reload (hot reload) using: nodemon, supervisor or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Key automation opportunities

Automation opportunities are grouped around tasks that tend to be manually repeated over the course of the development lifecycle. Some of those opportunities are, but are not limited to:

  • hot reloading the server after updating some source files
  • automatically executing tests after source/test code change
  • pre-commit lint/test/cleaning hooks

To name a few. There are two major workflow automation tools discussed in this article, but the process is applicable to any tool the reader wishes to pick. Those tools are, but are not limited to, npm, yarn, and gulp, plus husky for git hooks.

Hot reload can be achieved using one of the following tools: nodemon, supervisor or forever. The choice of tools does not end here, as there is always something cooking in the community. To start a server in watch mode, instead of starting the server as node server.js, we can use supervisor server.js instead. In the following sections, we will see how we can move this feature from the command line to npm scripts, or even to the gulp task runner.
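
As a sketch of that move, assuming nodemon is installed as a devDependency, the package.json scripts section may look like this:

"scripts": {
  "start": "node server.js",
  "dev": "nodemon server.js"
}

From there, $ npm run dev starts the server in watch mode.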

Workflow with npm

There are various issues related to relying on npm packages globally installed on one system. Some of those issues are exposed when code changes hands and runs on another platform: a deployment server, a CI server, or even a developer computer other than ours. The npm package version provided globally may not be the version required by the project at hand. There is no way to tell npm to use local package A instead of globally available package B. To eliminate that ambiguity, giving preference to modules local to the project makes sense.

How to manage globally installed devDependencies ~ StackOverflow Question. How to Solve the Global npm Module Dependency Problem

Workflow with gulp

Running gulp on a remote server requires manually installing a global version of gulp. Many applications may require a different gulp version. Typical gulp installation:

$ npm install gulp -g     #provides gulp `cli` globally
$ npm install gulp --save #provides gulp locally 

Since some applications may require a different version of gulp, adding a gulp alias to package.json, as in the following example, makes sure the locally sourced gulp is run.

"scripts": {
  "gulp": "./node_modules/.bin/gulp"  
}

Example: equivalent to gulp when installed globally

Use case: running mocha test with npm

This section highlights important steps to get tests up and running. Examples provided here cover single runs, as well as watch mode.

While searching for a task runner, stability, ease of use, and reporting capabilities come first. Even though mocha is easy to get started with, other tools such as jasmine-node, ava, or jest can do a pretty good job at testing node as well. They are worth giving a try.

supertest is a testing utility wrapper of superagent. It is useful when testing the endpoints of a REST API in end-to-end/contract/integration test scenarios. However, when working on unit tests, there is a need to intercept HTTP requests; mocking tools such as the nock HTTP mocking framework deserve a chance.

Starting with command line test runner instructions gives a pretty good baseline and an idea of how the npm script may end up looking. The following example showcases how to run tests while instrumenting a select set of source code files for reporting purposes:

$ ./node_modules/.bin/istanbul cover \
    --dir ./test/coverage -i 'lib/**' \
    ./node_modules/.bin/_mocha -- --reporter \
    spec  test/**/*spec.js

istanbul is a coverage tool and will be used to generate coverage reports as tests progress.

# in package.json, at scripts > "test", add the next line
istanbul cover _mocha -- --reporter mocha-lcov-reporter specs
# then run the tests using
$ npm test

In case that command works just fine, we can go ahead and add it to the scripts section of package.json, and that will be enough to execute the test runner command from npm.

There are additional features that can make this setup a little more hectic to work with. Even though mocha is the choice of this blog, jest is a pretty good alternative to test node too.

{
  "test": "mocha -R spec test/**/*spec.js",
  "test:compile": "mocha -R spec --compilers js:babel/register test/**/*spec.js",
  "test:watch": "npm test -- --watch",
  "test:coverage": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/_mocha -- --reporter spec test/**/*spec.js"
}

When using istanbul cover mocha, the error “No coverage information was collected, exit without writing coverage information” may be displayed. To avoid this error, use istanbul cover _mocha, which makes reporting available at the end of test execution.

Once the npm scripts are in place, we can leverage the command line once again, but this time using a smaller version of the command. We have to keep in mind that most environments have to have npm available globally.

$ npm run test:coverage
$ npm run test:watch
$ npm run test:compile

Use case: running mocha test with gulp

$ npm run gulp will use the scripts > gulp version. The reason for using gulp while testing is to have a smaller package.json scripts section. The tasks have to be written in ./gulpfile.js, and require gulp plugins to work. gulp tasks can also take on more complex custom work, such as deployment from the local machine, codemods, and various other tasks using projects that do not have a cli tool yet.
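
As an illustration, a minimal gulpfile.js running mocha tests may look like the following sketch, assuming the gulp-mocha plugin is installed locally:

//gulpfile.js
var gulp = require('gulp');
var mocha = require('gulp-mocha');

//a `test` task piping spec files into the mocha runner
gulp.task('test', function () {
  return gulp.src('test/**/*spec.js', { read: false })
    .pipe(mocha({ reporter: 'spec' }));
});

With the scripts alias above in place, $ npm run gulp -- test runs the task with the locally sourced gulp.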

Conclusion

In this article, we revisited key points of setting up a nodejs workflow. The workflow explored goes from writing code to automating tasks such as linting, testing, and release. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #workflow #npm #nx

For practical reasons and ease of maintenance, deploying smaller portions (atomic deployments) to different platforms is one strategy to achieve zero-downtime deployments. This article discusses how some portions of a large-scale application can be deployed to various cloud services, or servers, to keep the service running even when some nodes stay down.

In this article we will talk about:

  • The cloud is somebody else's servers, and therefore can fail
  • Plan B: alternative services to provide when the main server is down
  • How to design the application for resiliency, and avoid cloud vendor lock-in

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Modern days infrastructure

Cloud and serverless are somebody else's servers. As long as a server is in the equation, failures that lead to downtime are always going to be there, in one way or another. The price tag associated with running the application on somebody else's server, versus what it costs to roll out our own infrastructure, should also come into the picture when talking about nodejs infrastructure.

For more on cheap infrastructure, visit this github collection of free or cheap platforms, good from prototype to beta versions of your app.

Since cloud-native and serverless platforms are servers at their core, meaning they can fail, having a Plan B, or alternative services to fall back on when the main service is down, makes a good investment.

To solve the nodejs infrastructure equation, we always have to solve the “how to design the application for resiliency, while at the same time avoiding cloud vendor lock-in” equation. The good ol' server, hosted on bare metal, takes time to set up, but solves a whole range of issues while, at the same time, keeping our independence intact.

To compensate, there are CDNs that we can tap into to get assets to the user faster. The same applies to distributed caching solutions, which can save the main service dollars when paying for traffic.

In any case, it can be a good thing to split code into chunks that can be deployed to different servers. For example, pictures can be hosted on one service (Netlify, etc.), the database service (server) can be on a completely different platform, and the REST API or messaging service on yet another.

For more on cheap bare-bone server providers, visit this link to make a choice

Conclusion

In this article, we revisited how to achieve cloud vendor independence while at the same time building resilient nodejs applications. There are additional complementary materials in the “Testing nodejs applications” book.

References

#nodejs #cloud #serverless #discuss

One of the reasons nodejs applications slow down is a lack of accountability when it comes to managing memory. We normally defer memory management tasks to the garbage collector. That is an answer to a couple of issues, but it turned out to also be a problem.

This blog takes a different approach: it only states facts about key memory-hogging operations, and provides quick fixes whenever there is one, without going into too many details or references.

In this article we will talk about:

  • Identifying memory leak issues.
  • Tracing nodejs application memory issues
  • Cleaning nodejs long-lasting objects
  • Production grade memory leak detection tools

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

Memory Leak

Managing memory can be a daunting task in a nodejs environment. Some strategies to detect and correct memory leaks can be found in the following articles.
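
Before reaching for dedicated tooling, a quick first signal can come from the core process.memoryUsage() API: a heapUsed figure that climbs steadily under constant load hints at a leak. A minimal sketch:

//log heap usage periodically to spot steady growth
var toMB = function (bytes) { return (bytes / 1024 / 1024).toFixed(2); };

setInterval(function () {
  var usage = process.memoryUsage();
  console.log('rss ' + toMB(usage.rss) + 'MB | heap ' +
    toMB(usage.heapUsed) + '/' + toMB(usage.heapTotal) + 'MB');
}, 60 * 1000);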

This article is unfinished business; more content will be added as I experience memory leak problems, or find some interesting use cases off Github and StackOverflow.

Conclusion

In this article, we focused on identifying potential sources of memory leaks in nodejs applications and provided early detection mechanisms so that the same problems do not happen again. There are additional complementary materials on memory management in the “Testing nodejs Applications” book.

#snippets #performance #nodejs #memory-leak

This blog is at the same time a reflection on taking the monorepo route, from both the decision-making perspective and the implementation side. This document takes two approaches to the problem.

First, identify the aspects that have to be containerized, and test the containers in both development and production mode.

Second, create the actual codebase built atop containers, to test and deliver code to production. It will be better if CI/CD is included in this package.

This article is under active development; more information is going to be added frequently, as I find free time.

In this article we will talk about:

  • What is the structure of a typical monorepo
  • What are the tools used to manage daily development activities
  • Compared to a multirepo project layout, what are the main components of a monorepo that do not exist in a multirepo, and vice versa
  • How do packages relate to the actual application from the content perspective
  • What are the key differences between a monorepo and monolith
  • What are key differences between a monorepo and a multi-repos
  • How is monorepo different from git-submodules
  • Is it possible to leverage git submodule add <url> projects/url to compose multiple projects into one independent project
  • What are the best strategies for transitioning from multi-repos to monorepo
  • What are the best strategies for transitioning from monolith to monorepo
  • How do frontend/backend/widgets code repositories fit into monorepo project layout architecture
  • How to deploy monorepo projects to different platforms (frontend and widgets to CDN, backend to backend servers)
  • How to share the core packages amongst monorepo components, without using npm
  • How packaging monorepo works
  • Does a monorepo allow deploying and installing private packages
  • How to automate versioning in a monorepo context
  • How to automate change-log in a monorepo context
  • How to automate release notes in a monorepo context
  • How to automate deployment in a monorepo context
  • How to manage deployment keys in a monorepo context

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

The monorepo project layout architecture

The monorepo project layout architecture is ideal when planning to share Web UI/SDKs/API and backend code.

What are key differences between a monorepo and monolith

  • Misconceptions about monorepos: monorepo != monolith ~ Nrwl Blog
  • monorepo and npm ~ npm Blog

How is monorepo different from git-submodules

Tools to manage monorepos

npm is not well-positioned to manage monorepos, as of this writing. There are, however, other tools that help to achieve that. Here is a non-exhaustive list:

  • lerna ~ community-backed
  • rush ~ Microsoft backed
  • yarn ~ Facebook-backed. Uses workspaces, which in reality are monorepo components as well.

  • How to successfully manage a large scale JavaScript monorepo aka megarepo ~ Jonathan Creamer Blog

Sharing code between repos in the same monorepos, without needing npm

The package.json private: true property makes sure the package is never accidentally published to the npm registry; dependencies can instead rely on git/github. One extra mile when using monorepos is to share the code using tar files.

It is possible to use tar files. That will require each release to have its own tar file that is deployable and reachable at a particular endpoint.

{
    "dependencies": {
      "common-package": "file:../common-package",
      "tar-common-package": "github.com/../../common-package.tar.gz"
    }
}

github.com makes it possible to install packages from its servers.

Sharing code via npm without opening an npmjs.com account

This section introduces two concepts, with a direct dependency on each other. The first concept deals with using git/github as an npm hub. This makes it possible to avoid opening an account on hosting services such as npmjs, while still being able to keep private packages. The second concept makes sure private packages are installable using npm install, just like regular packages.

Alternatively, there is a newer infrastructure that github.com rolled out: GitHub Packages, which can be installed using the npm installer. We will see how to operate these as well.
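
As a sketch of the first concept, npm understands github: specifiers out of the box; org/private-repo and the tag below are placeholders:

{
  "dependencies": {
    "private-package": "github:org/private-repo#v1.2.3"
  }
}

For the GitHub Packages route, the usual setup is a project-level .npmrc pointing the scope at the GitHub registry (the @org scope and token are placeholders):

# .npmrc
@org:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}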

How to manage authentication keys

One of the problems sharing a large codebase relates to security. How is it possible to share authentication keys, such as database passwords, without compromising the overall security of the application?

Containerization with Docker

Docker makes it possible to run a stack of applications, regardless of the system the application was developed on. Docker also makes it possible to successfully simulate the application's behavior before the application finally hits the production servers.
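
A minimal Dockerfile for a nodejs application may look like the following sketch, assuming the entry point is server.js and the application listens on port 3000:

# Dockerfile
FROM node:lts-alpine
WORKDIR /usr/src/app

# install dependencies first to leverage docker layer caching
COPY package*.json ./
RUN npm install --production

COPY . .
EXPOSE 3000
CMD ["node", "server.js"]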

Container Orchestration with kubernetes

If Docker symbolizes Containerization, kubernetes is aligning itself as the best container orchestration resource. This guide provides resources to get started, and the basic designs that are commonly used in the MEAN stack world.

Installation can be done via MacPorts or Homebrew. It is always possible to use binaries as well.

Containerized database

This section is an exploration of the implementation of a clustered database. We will see what it takes to deploy a containerized database, how to upgrade to new engines, how to back up and migrate data, and how to migrate to new models.

How to deploy monorepo apps

In a multi-document context, how do we deploy one section to one platform and another section to another platform?

Every single build has to have a corresponding individual deploy script. Push-to-deploy would be a challenge, unless there is an alternative to selectively detect which part has to go where. Otherwise, all servers running an instance of the code have to have a copy of the full monorepo code.

Conclusion

In this article, we reviewed what it takes, and the reasons, to move to a monorepo architecture. We also revisited the monorepo coupled with containerization techniques to deliver a better developer experience. There are additional complementary materials in the “Testing nodejs applications” book.

References

#nodejs #monorepos #multirepos #monoliths #microservice

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. Even though the real-time aspect may be attributed to the WebSocket implementation, the reactive, realtime aspect of nodejs applications relies heavily on pub/sub mechanisms — most of the time backed by datastore engines like redis. This article explores how to integrate the redis datastore into a nodejs application.

In this article we will talk about:

  • redis support with and without expressjs
  • redis support with and without WebSocket push mechanism
  • Alternatives to redis in nodejs world and beyond

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

The following code sample showcases how nodejs/redis integration can be achieved. It demonstrates that it is possible to share sessions via a middleware between socket.io and expressjs.

var express = require('express'),
	Server = require('http').Server,
	app = express(),
	server = Server(app),
	sio = require("socket.io")(server),
	redis = require('redis'),
	rhost = process.env.REDIS_HOST,
	rport = process.env.REDIS_PORT,
	pub = redis.createClient(rport, rhost),
	sub = redis.createClient(rport, rhost);


function middleware(req, res, next){
 //session initialization thing
 next();
}

//socket.io/expressjs session sharing middleware
sio.use(function(socket, next){
 	middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);

//somewhere
sio.sockets.on("connection", function(socket) {

 //socket.request.session
 //Now it's available from `socket.io` sockets too! Win!
 socket.on('message', (event) => {
	 var payload = JSON.parse(event.payload || event),
	 	user = socket.handshake.user || false;

	 //except when coming from pub
	 pub.publish(payload.conversation, JSON.stringify(payload));
 });
});

//redis listener ~ registered once, relays messages to the right room
sub.on('message', function(channel, event) {
	var payload = JSON.parse(event.payload || event);
	sio.sockets.in(payload.conversation).emit('message', payload);
});

Example: excerpt source: StackOverflow

What can possibly go wrong?

When trying to figure out how to approach redis datastore integration into a nodejs application, for inter-process communication and real-time features, the following points may be a challenge:

  • How to decouple the WebSocket events from the redis-specific (pub/sub) events. We should be able to decouple the two, but still provide an environment where interoperability is possible at any time.
  • How to make integration modular, testable, and overall friendly to the rest of the application ecosystem

When testing this implementation, we should expect additional challenges to emerge:

  • The redis client instances (pub/sub) are created as soon as the library loads, and a redis server should be up and running by that time. The issue: when testing the application, there should be no server or any other system dependency hindering the application from being tested. One way out is to defer client creation, as sketched below.
  • Getting rid of the redis server with a drop-in replacement, or stubs/mocks, is more of a dream than reality ~ hard, but feasible.
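
A minimal sketch of that deferral, hiding client creation behind a small factory so tests can swap in a fake (the redis-mock package is one option); the module path is hypothetical:

//utils/redis-client.js ~ defers client creation until first use
var redis = require('redis');
var client = null;

module.exports = function getClient(factory) {
  //tests may pass a factory returning a fake client
  if (!client) {
    client = (factory || function () {
      return redis.createClient(process.env.REDIS_PORT, process.env.REDIS_HOST);
    })();
  }
  return client;
};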

There is additional information on mocking and stubbing the redis data store in the “How to Mock redis datastore” article.

Conclusion

In this article, we revisited how to enhance a nodejs application with a redis-based pub/sub mechanism, critical to having a reactive, real-time experience. The use of WebSocket and pub/sub powered by a key/value data store was the main focus of this article. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #integration #redis

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. However, the real-time magic is attributed to the WebSocket addition. This article introduces how to integrate WebSocket support into an existing nodejs application.

In this article we will talk about:

  • WebSocket support with or without socket.io
  • WebSocket support with or without expressjs
  • Modularization of WebSocket support

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

A socket/socket.io enabled application, where express routes use the socket.io instance to deliver messages, looks like the following:

//module/socket.js - server or express app instance
var socket = require('socket.io');
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connect', fn);
  io.on('disconnect', fn);
};
//OR
//module/socket.js
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

//module/routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(){
    route.all('', function(req, res, next){
    	res.send();
    	next();
 });
};

//in server.js
var app = require('express')(),
  server = require('http').createServer(app),
  sio = require('module/socket.js')(server);

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between SocketIO and Express
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

//application app.js|server.js initialization, etc.
require('module/routes')(server);

What can possibly go wrong?

When working in this kind of environment, we will find these two points to be of interest, if not challenging:

  • Having the socket.io application use the same expressjs server instance, or sharing a route instance with the socket.io server
  • Sharing sessions between the socket.io and expressjs applications

Conclusion

In this article, we revisited how to add a real-time experience to a nodejs application. The integration and modularization of WebSocket support, with or without socket.io and expressjs, was the main focus of this article. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #integration #WebSocket

Building, testing, deploying, and maintaining large-scale applications is challenging in many ways. It takes discipline, structure, and rock-solid processes to succeed with production-ready nodejs applications. This document puts together a collection of ideas and tribulations from a personal perspective, so that you can avoid some mistakes and succeed with your project.

In this article we will talk about:

  • Avoiding the integration test trap
  • Mocking strategically, or applying code re-usability to mocks
  • Achieving healthy test coverage

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Why

There is a lot of discussion around unit testing. It is not that developers don't like testing; the testing approach simply differs from one person to another, and from one system to the next. It is also worth highlighting that some, if not the majority, skip TDD's way of doing business for alternatives. One thing is clear: you cannot guarantee the sanity of a piece of code unless it is tested. The “HOW” may be the problem we have to figure out and fix. In other words, should a test be carried out before or after writing the code?

Pros: tests (unit tests)

  • Increases confidence when releasing new versions.
  • Increases confidence when changing the code, such as during refactoring exercises.
  • Increases overall code health, and reduces bug count
  • Helps developers new to the project better understand the code

Cons:

  • Takes time to write, refactor and maintain
  • Increases codebase learning curve

What

There is a consensus that every feature should be tested before landing in a production environment. Since tests tend to be repetitive and time-consuming, it makes sense to automate the majority of them, if not all. Automation makes it feasible to run regression tests on quite a large codebase, and tends to be more accurate and effective than manual testing alone.

Layers that require particular attention while testing are:

  • Unit test controllers
  • Unit test business logic (services) and domain (models)
  • Utility library
  • Server start/stop/restart and anything in between those states
  • Testing routes (integration testing)
  • Testing secured routes

Questions we should keep in mind while testing are: How do we create good test cases? (Case > Feature > Expectations) And how do we unit test controllers while avoiding tests becoming integration tests?

The beauty of having nodejs, or JavaScript in general, is that to some extent, some test cases can be reusable for back-end as well as front-end code. Like any other component/module, unit test code should be refactored for better structure and readability, just like the rest of the codebase.

Choosing Testing frameworks

For those who bought into the idea of having a TDD way of doing business, here are a couple of things to consider when choosing a testing framework:

  • Learning curve
  • How easy to integrate into project/existing testing frameworks
  • How long does it take to debug testing code
  • How good is the documentation
  • How good is the community backing the testing framework, and how well the library happens to be maintained
  • How test doubles (Spies, Mocking, Coverage reports, etc) work within the framework. Third-party test doubles tend to beat framework native test doubles.

Conclusion

In this article, we revisited high-level objectives when testing a nodejs application deployable at scale. There are additional complementary materials in the “Testing nodejs applications” book that dive deeper into the integration testing trap and how to avoid it, how to achieve healthy code coverage without breaking the piggy bank, as well as some thoughts on mocking strategically.

References

#snippets #code #annotations #question #discuss

This post highlights snapshots of best practices and hacks to code, test, deploy, and maintain large-scale nodejs apps. It provides the broad strokes of what became a book on testing nodejs applications.

If you haven't yet, read the How to make nodejs applications modular article. This article is an overall follow-up.

Like some of the articles that came before this one, we are going to focus on a simple question as our north star: what are the most important questions developers have when testing a nodejs application? When possible, a quick answer will be provided; otherwise, we will point in the right direction where information can be found.

In this article we will talk about:

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

var express = require('express'),
  app = express(),
  server = require('http').createServer(app);
//...
require('./config');
require('./utils/mongodb');
require('./utils/middleware')(app);
require('./routes')(app);
require('./realtime')(app, server);
//...
module.exports.server = server;

Example:

The code provided here is a recap of How to make nodejs applications modular article. You may need to give it a test drive, as this section highlights an already modularized example.

Testing

Automation is what developers do for a living. Manual testing is tedious and repetitive, and those are two key characteristics of things we love automating. Automated testing is quite intimidating for newbies and veterans alike. Testing tends to be more of an art: the more you practice, the better you hone your craft.

In the blogosphere, – My node Test Strategy ~ RSharper Blog. – nodejs testing essentials

BDD versus TDD

Why should we even test

The value of testing is unanimous within the developer community; the question is always around how to go about testing.

There is a discussion, mentioned in the first chapter, between @kentbeck, @martinfowler and @dhh that made the rounds on social media and blogs, and finally became a subject of reflection in the community. When dealing with legacy code, there should be a balance: adopt tdd as one tool in our toolbox, not the whole toolbox.

In the book, we do the following exercise as an alternative to classic tdd: read, analyze, modify if necessary, rinse and repeat. We cut the bullshit, test whatever needs to be tested, and let nature take its course.

One thing is clear: We cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is on “How” to go about testing.

There is a summary of the discussions mentioned earlier, titled Is TDD Dead?. In the blogosphere, – BDD-TDD ~ RobotLovesYou Blog. – My node Test Strategy ~ RSharper Blog – A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials

What should be tested

Before we dive into it, let's re-examine the pros and cons of automated tests — in the current case, unit tests.

Pros:

  • Steers release confidence
  • Prevents common use case and unexpected bugs
  • Helps the project's new developers better understand the code
  • Improves confidence when refactoring code
  • A well-tested product improves the customer experience

Cons:

  • Takes time to write
  • Increases the learning curve

At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything, be it features of a product or functions of code. Re-testing large applications manually is daunting, exhausting, and sometimes simply not feasible.

A good way to think about testing is not in terms of layers (controllers, models, etc.). Layers tend to be bigger. It is better to think in terms of something much smaller, like a function (the TDD way) or a feature (the BDD way).

In brief, every controller, business logic unit, utility library, nodejs server, and route, all features, should be set to be tested ahead of release.

There is an article on this blog that gives more insight on how to create good test cases (Case > Feature > Expectations | GivenWhenThen), titled “How to write test cases developers will love reading”. In the blogosphere, – Getting started with nodejs and mocha

Choosing the right testing tools

There is no shortage of tools in the nodejs community; the problem is analysis paralysis. Whenever the time comes to choose testing tools, these layers should be taken into account: test runners, test doubles, reporting, and eventually any compiler that needs to be added to the mix.

Other than that, here is a list of a few things to consider when choosing a testing framework:

  • Learning curve
  • How easy it is to integrate into the project/existing testing frameworks
  • How long it takes to debug testing code
  • How good the documentation is
  • How big the community is, and how well the library is maintained
  • What it may solve faster (spies, mocking, coverage reports, etc.)
  • Instrumentation and test reporting, just to name a few.

There are sections dedicated to providing hints and suggestions throughout the book. There is also the article “How to choose the right tools” on this blog, which gives a baseline framework for choosing, not only testing frameworks, but any tool. Finally, in the blogosphere, – jasmine vs. mocha, chai and sinon – Evan Hahn has pretty good examples of the use of test doubles in the How do I jasmine blog post – Getting started with nodejs and jasmine, which has some pretty amazing examples and is simple to start with – Testing expressjs REST APIs with Mocha

Testing servers

The not-so-obvious part when testing servers is how to simulate starting and stopping the server. These two operations should not bootstrap dependent servers (database, data-stores) or make side effects (network requests, writing to files), to reduce the risk associated with running an actual server.
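
A minimal sketch of such a test, assuming server.js exports its http server without calling listen and without bootstrapping a database in test mode:

//server.spec.js
var expect = require('chai').expect;

describe('server', function () {
  var server;

  beforeEach(function () {
    //re-require a fresh instance for every test
    delete require.cache[require.resolve('../server')];
    server = require('../server').server;
  });

  it('starts and stops without side effects', function (done) {
    //port 0 asks the OS for any free port
    server.listen(0, function () {
      expect(server.address().port).to.be.above(0);
      server.close(done);
    });
  });
});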

There is a chapter dedicated to testing servers in the book, and an article on this blog that can give more insights. In the blogosphere, – How to correctly unit test express server – There is a better code structure organization, which makes it easy to test and get good test coverage, in “Testing nodejs with mocha”.

Testing modules

Testing modules is not that different from testing a function or a class. When we start looking at it from this angle, things get a little easier.

The grain of salt: a module that is not directly a core component of our application should be left alone, and mocked out entirely when possible. This way, we keep things isolated.

There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries (modules) in the book. There is also an entire series of articles on this blog about modularization techniques: a more theoretical “How to make nodejs applications modular”, and a more technical “How to modularize nodejs applications”. In the blogosphere, – Export This: Interface Design Patterns for nodejs Modules by Alon Salant, CEO of Good Eggs – nodejs module patterns using simple examples by Darren DeRider – How to modularize your Chat Application

Testing routes

Challenges while testing expressjs Routes

Some of the challenges associated with testing routes are testing authenticated routes, mocking requests, and mocking responses, as well as testing routes in isolation without the need to spin up a server. When testing routes, it is easy to fall into the integration testing trap, either for simplicity or for lack of motivation to dig deeper.

The integration testing trap is when a developer confuses integration tests (or E2E tests) with unit tests, and vice versa. The success of balanced test coverage lies in identifying sooner the kind of tests adequate for a given context, and what percentage of each kind to apply.

For a test to be a unit test in a route testing context, it should:

  • Focus on testing a code block (function, class, etc.), not the output of a route
  • Mock requests to third-party systems (payment gateway, email systems, etc.)
  • Mock database read/write operations
  • Test worst-case scenarios such as missing data and broken data structures
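
A minimal sketch of those points, using hand-rolled request/response fakes and sinon against a hypothetical handler:

var sinon = require('sinon');

//hypothetical handler under test
function getOrder(req, res) {
  if (!req.params.id) return res.status(400).json({ error: 'missing id' });
  return res.status(200).json({ id: req.params.id });
}

it('rejects requests with no id', function () {
  var req = { params: {} };
  var res = { status: sinon.stub().returnsThis(), json: sinon.spy() };

  getOrder(req, res);

  sinon.assert.calledWith(res.status, 400);
  sinon.assert.calledWithMatch(res.json, { error: 'missing id' });
});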

There is a chapter dedicated to testing routes in the book. There is also the article “Testing expressjs Routes” on this blog that gives more insight on the subject. In the blogosphere – A TDD approach to building a todo API using nodejs and mongodb – Marcus on supertest ~ Marcus Soft Blog

Testing controllers

When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or even classes. In MVC jargon, this layer is also known as the controller layer.

Challenges when testing controllers are, by no surprise, the same as when testing expressjs route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations, or service layers, that are not core/critical to validating the controller's expectations, is one of those challenges; mocking controller request/response objects and, when necessary, some middleware functions, is another.

There is a chapter dedicated to testing controllers in the book. There is also this article Testing nodejs controllers with expressjs framework on this blog that gives more insight on the subject. In the blogosphere, – This article covers Mocking Responses, etc — How to test express controllers.

Testing services

There are some instances where adding a service layer makes sense.

One of those instances is when an application has a collection of single functions under a utility umbrella (utils). Chances are some of the functions under that umbrella are related in terms of the features or functionality they offer, or both. Such functions are a good use case to be grouped under a class: a service.

Another good example is applications that heavily use the model. Chances are the same functions are re-used in multiple instances, and fixing an issue involves fixing multiple places as well. When that is the case, such functions can be grouped under one banner, in such a way that an update to one function gets reflected in every instance where the function has been used.

From these two use cases, testing services has no one-size-fits-all strategy. Every service should be dealt with depending on the context it operates in.

There is a chapter dedicated to testing services in the book. In the blogosphere, – “Building Structured Backends with nodejs and HexNut” by Francis Stokes ~ aka @fstokesman on Twitter source ...

Testing middleware

Middleware, in a sense, are hooks that intercept, process, and forward results to the rest of the route, in expressjs (connectjs) jargon. It is no surprise that testing middleware shares the same challenges as testing route handlers and controllers.

There is a chapter dedicated to testing middleware in the book. There is also this article “Testing expressjs Middleware” on this blog that gives more insight on the subject. In the blogosphere, – How to test expressjs controllers

Testing asynchronous code

Asynchronous code is a wide subject in the nodejs community. Constructs ranging from regular callbacks, promises, async/await, streams, and event streams (reactive) all fall under the asynchronous umbrella.

Challenges associated with asynchronous testing depend on the use case and context at hand. However, there are striking similarities between, say, testing async/await versus testing a promise.

When an object is available, it makes sense to get a hold of it and execute assertions once it resolves. That is feasible for promises, streams, and the async/await construct. However, when the object is some kind of event, then the hold on the object can be used to add a listener and assert once the listener fires.
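
Two minimal sketches of that idea, assuming mocha/chai and hypothetical orderService and emitter objects:

//promises and async/await: await the value, then assert
it('resolves the order total', async function () {
  var total = await orderService.total('42');
  expect(total).to.equal(100);
});

//events: attach a listener, assert inside it, signal completion
it('emits a message event', function (done) {
  emitter.once('message', function (payload) {
    expect(payload).to.have.property('conversation');
    done();
  });
  emitter.emit('message', { conversation: 'general' });
});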

There are multiple chapters dedicated to testing asynchronous code in the book. There are also multiple article on this blog that gives more insight on the subject such as – “How to stub a stream function”“How to Stub Promise Function and Mock Resolved Output”“Testing nodejs streams”. In the blogosphere, – []()

Testing models

Testing models goes hand in hand with mocking database access functions.

Functions that access or change database state can be replaced by fakes: custom function replacements capable of supplying or emulating results similar to those of the replaced functions. See the sketch below.

sinon may not be a unanimous choice, but it is a feature-complete, battle-tested test double library, amongst many others to choose from.
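
A minimal sketch with sinon, stubbing a mongoose-style save so no database is touched (the Order model path and the fake payload are placeholders):

var sinon = require('sinon');
var Order = require('./order/model');

var fakeOrder = { id: '42', status: 'pending' };
var save = sinon.stub(Order.prototype, 'save').yields(null, fakeOrder);

//code under test calls new Order(params).save(cb) ~ cb receives the fake
new Order({}).save(function (err, order) {
  console.log(order.status); //'pending', no database involved
});

save.restore(); //always restore after the test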

There is a chapter dedicated to testing models in the book. There is also this article []() on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose modelsstubbing mongoose model question and answers on StackOverflow – Mocking database calls by wrapping mongoose with mockgoose

Testing WebSockets

Some of the challenges of testing WebSockets can be summarized as trying to simulate sending and receiving messages on the WebSocket endpoint.

There is a chapter dedicated to testing WebSockets in the book. There is also this article on this blog that can give more ideas on how to go about testing WebSocket endpoints — another one on how to integrate WebSockets with nodejs. Elsewhere in the blogosphere, – Testing socket.io with mocha, should.js and socket.io clientsharing session between expressjs and socket.io

Testing background jobs

Background jobs bring batch processing to the nodejs ecosystem. They constitute a special use case of asynchronous communication that spans time, and the processes the system is running on.

Testing this kind of complex construct requires distilling the fundamental work done by each function/construct, focusing on the signal without losing the big picture. It requires quite a paradigm shift (a phrase used with reservation).

There is a chapter dedicated to testing background jobs in the book. There is an article Testing nodejs streams on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose models ~ CodeUtopia Blog

Conclusion

Some source code samples came from QA sites such as StackOverflow, hackers gists, Github documentation, developer blogs, and from my personal projects.

There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because mentioning all of them would fill a book.

In this article, we highlighted what it takes to test various layers, while drawing a distinction between the BDD/TDD testing schools. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #testing #tdd #bdd

In the real world, jargon is sometimes confusing to the point some words may lose their intended meaning to some audiences. This blog re-injects clarity on some misconceptions around testing jargon.

In this article we will talk about:

  • Confusing technical terms around testing tools
  • Confusing technical terms around testing strategies
  • How to easily remember “which is what” around testing tools
  • How to easily remember “which is what” around test strategies

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You may use this link to buy the book.

Show me the code

//order/model.js
Order.find()
	.populate()
	.sort()
	.exec(function(err, order){ 
        /** ... */
    });

//order/middleware.js ~ next comes from the enclosing middleware
new Order(params).save((error, order) => {
    if(error) return next(error, null);
    new OrderService(order).scheduleShipping();
    return next(null, order);
});

Example: Used to showcase test double usage

Asking the right questions

This article is unfinished business; more content will be added as I gather more confusing terms, or find some more interesting use cases.

When trying to figure out the “what does what” of testing stacks, the following questions give a clue on how to differentiate one concept from the next, and one tool from the next:

  • What role does the test runner play when testing the code sample above
  • What role does the test doubles play when testing the code sample above
  • How do a test runner and test doubles stack up in this particular use case.
  • How does stubbing differ from mocking
  • How stubbing differs from spy fakes ~ (functions with pre-programmed behavior that replace real behaviors)
  • How can we tell the function next(error, obj) has been called with — order or error arguments in our case?
  • What are other common technical terms that cause confusion around test strategies — things such as unit tests, integration tests, and end-to-end tests. Other less common, but confusing anyway, terms like smoke testing, exploratory testing, regression testing, performance testing, benchmarking, and the list goes on. We have to make an inventory of these terms (alone or in a team), and formulate a definition for each one before we adopt them in vernacular day-to-day communication.
  • What are other common technical terms that cause confusion around testing tools — which tools are more likely to cause confusion, and how do they stack up in our test cases (test double tools: mocking tools, spying tools, stubbing tools, versus frameworks such as mocha, chai, sinon, jest, jasmine, etc.)
  • How to easily remember which is what terms around test strategies such as unit/regression/integration/smoke/etc tests
  • How to easily remember which is what terms around testing tools such as mocks/stubs/fakes/spies/etc tools.

In the test double universe, there is a clear distinction between a stub, a fake, a mock, and a spy. The stub is a drop-in replacement for a function or method we are not willing to run when testing a code block. A mock, on the other hand, is a drop-in replacement for an object, most of the time used when data encapsulated in an object is expensive to get. The object can be just data, an instance of a class, or both. A spy tells us when a function has been called, without necessarily replacing the function. A fake and a stub are pretty close relatives: both replace function implementations.
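
Mapped onto the code sample above, a sketch with sinon answers the next(error, obj) question from the list, assuming the Order model from the sample is in scope and the middleware receives next as shown:

var sinon = require('sinon');

//spy: observe calls without replacing the function
var next = sinon.spy();

//stub: replace the database write with pre-programmed behavior
var save = sinon.stub(Order.prototype, 'save').yields(null, { id: '42' });

//... invoke the middleware under test, then assert:
sinon.assert.calledWith(next, null, sinon.match.has('id'));
sinon.assert.neverCalledWith(next, sinon.match.instanceOf(Error));

save.restore();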

Conclusion

In this blog, we explored differentiations around test strategies and testing tools. Without prescribing a solution, we let our thoughts play with possibilities around solutions and strategies that we can adapt for various use cases and projects.

References

#snippets #code #annotations #question #discuss
