Simple Engineering

nodejs

nodejs application project layouts

The project layout follows a set of conventions around the project's codebase structure. Such conventions can be adopted by a team, or taken verbatim from the community of developers. This article explores commonly adopted nodejs application project layouts.

In this article we will talk about:

  • Minimalist layouts
  • Multi-repository layouts
  • Mono-repository layouts
  • Modular layouts

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Minimalist layouts

The minimalist layout applies to the simplest programs. Such programs may span one to a couple of files, preferably under 10. Anything that goes beyond 10 files is likely to reach 100 or even 1000 down the road. Minimal projects can be liberal in the choice of file names, as a directory structure is not really needed. The rule of thumb is YAGNI. It makes sense to have as few files as possible, as long as we have no idea how big the program has to grow. Categorizing files by type (category) under these circumstances makes complete sense. An example of a type (category) file structure may look like /utils.js, /index.js, etc.

When an application starts to grow beyond 10+ files, it makes complete sense to organize files under directories. The question is: should we take a category approach such as /utils/index.js and /controllers/index.js, or does it make more sense to organize files by utility (feature), for instance /inbox/utils/index.js or /catalogue/models/index.js? The next paragraphs will provide clarity on this.

Project layout by category

There are multiple categories of small programs that make a software suite run. When those are sliced following the layered architecture (models, views, controllers, services, etc.), the project structure becomes organized by category (or by kind). In the early days of a project, when there is no clear specialization, it makes sense to keep it simple and organize directories by category.

The problem becomes a little messier when requirements to add an inbox, a catalog, or any other major feature land on the project. The next paragraph shows how we can specialize the directory structure as features get added to the project.

Project layout by feature

It may take a little longer to realize how organizing a project by the features the application provides makes it really simple to track project progress. When files are organized by category, it is hard to isolate a major new feature, or detect how far along it is, simply by looking at the project layout. File organization by feature makes it clear how many features are in a project.

There are some concerns that this strategy may make code reusability a challenge, if not a mess. When you look at it, those concerns are perfectly legit. For example, the /inbox feature may have model/user.js. By the time the /admin feature gets added to the project, it is almost guaranteed that /admin/model/user.js and /inbox/model/user.js will represent the same thing. There should be an approach that makes sharing cross-feature code feasible. That is where a hybrid project layout comes into play.

Feature/Category Hybrid project layout

The hybrid model combines the best of both the "layout by feature" and "layout by category" approaches. The starting point is the feature-based project layout. Let's take an example where a feature is a catalog (or inventory). The catalog may have a controller, model, service, and a bunch of views. Using the minimalist approach, as long as the catalog has only one controller, one model, one service, and one view, those single categories can be kept as .js files, otherwise directories. Let's assume that further iterations require adding inventory, and that both the inventory and catalog share the product model. One thing we want to avoid is a dependency between inventory and catalog, so it makes sense to add a directory where shared code can be stored. Such directories have recognizable names such as /common, /core, or /lib. Our product model can be moved to /core/models/product.js. It is worth highlighting that /core is, in this case, organized by category and not by feature. This closes our case for the hybrid project layout, sketched below.
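A hypothetical hybrid layout, sketched as shell commands so it stays concrete; every path below is illustrative, following the catalog/inventory example above:

$ mkdir -p catalog inventory core/models
$ touch catalog/controller.js catalog/model.js catalog/service.js catalog/view.js
$ touch inventory/controller.js inventory/service.js
$ touch core/models/product.js  # shared by catalog and inventory, no cross-feature dependency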

Multi-repository layouts

The multi-repository layout is often based on git's submodule model. A parent project may pull in other projects managed by git. Individual projects can be organized by category, by feature, or a mix of both (hybrid). The evident use case is when we have backend code, an SPA for frontend code, a bunch of migration scripts, or even programs such as widgets and CLIs, each living in its own repository.
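Composing such a parent project relies on ordinary git commands; a minimal sketch, with hypothetical repository URLs:

$ git submodule add https://github.com/org/backend.git projects/backend
$ git submodule add https://github.com/org/frontend.git projects/frontend
$ git submodule update --init --recursive  # fetch nested submodules after cloning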

Mono-repository layouts

The mono-repository, also known as the monorepo approach, makes the monolith sexy and easy to work with. The monolith puts all programs under one roof, which makes it hard to deploy, especially in a CI environment or when dependencies are tightly coupled. Monoliths are those applications that have database, business, and rendering logic embedded not only in the same repository, but also running at the same time when deployed. They are hard to maintain, depending on how big the project turns out to be, and it is quite a challenge to stick with a monolith when a program is being shipped multiple times a day.

Modular layouts

The modular approach is more aligned with what npm and nodejs's /node_modules have to offer. Each and every top-level directory (-ish) can serve as an independent, complete application module.

Conclusion

In this article, we revisited various project layout schemes and assessed the requirements to adopt one over the other. We barely scratched the surface, but there are additional complementary materials in the “Testing nodejs applications” book that dig deeper into the subject.

tags: #monorepo #monolyth #project #nodejs

This blog post highlights key points to consider when setting up a nodejs application workflow.

In this article we will talk about:

  • Key workflow that requires automation
  • Automating workflow using npm
  • Automating workflow using gulp
  • Inter-operable workflow using npm and gulp
  • Other tools: nx
  • Auto reload (hot reload) using nodemon, supervisor, or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Key automation opportunities

Automation opportunities are grouped around tasks that tend to be manually repeated over the course of the development lifecycle. Some of those opportunities include:

  • hot reloading the server after updating some source files
  • automatically executing tests after source/test code change
  • pre-commit lint/test/cleaning hooks

To name a few. Two major workflow automation tools are discussed in this article, but the process will be applicable to any tool the reader wishes to pick. Those tools include, but are not limited to, npm, yarn, and gulp, plus husky for git hooks.

Hot reload can be achieved using one of the following tools: nodemon, supervisor, or forever. The choice of tools does not end here, as there is always something cooking in the community. To start a server in watch mode, instead of starting the server as node server.js, we can use supervisor server.js. In the following sections, we will see how to move this feature from the command line to npm scripts, or even to the gulp task runner.
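As a quick sketch, the same server can be started with any of the watch-mode tools above; nodemon is invoked through npx here, assuming it is declared in devDependencies:

$ node server.js          # plain start, no reload
$ supervisor server.js    # restarts the process on file change
$ npx nodemon server.js   # same idea, using the project-local nodemon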

Workflow with npm

There are various issues related to relying on npm packages globally installed on one system. Some of those issues are exposed when code changes hands and runs on another platform: a deployment server, a CI server, or even a developer computer other than ours. The npm package version provided globally may not be the npm package version required by the project at hand. There is no indication to tell npm to use local package A instead of globally available package B. To eliminate that ambiguity, it makes sense to prefer modules installed locally to the project.
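One detail works in our favor here: when npm runs a script, it prepends ./node_modules/.bin to the PATH, so locally installed binaries win over global ones. A minimal sketch, assuming mocha and eslint are declared in devDependencies:

"scripts": {
  "lint": "eslint .",
  "test": "mocha test/**/*spec.js"
}

Running npm run lint then resolves eslint to ./node_modules/.bin/eslint, regardless of what is installed globally.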

  • How to manage globally installed devDependencies ~ StackOverflow Question
  • How to Solve the Global npm Module Dependency Problem

Workflow with gulp

Running gulp on a remote server requires manually installing a global version of gulp. Different applications may require different gulp versions. A typical gulp installation:

$ npm install gulp -g     #provides gulp `cli` globally
$ npm install gulp --save #provides gulp locally 

Since some applications may require a different version of gulp, adding gulp to the package.json scripts as in the following example makes sure the locally sourced gulp is run.

"scripts": {
  "gulp": "./node_modules/.bin/gulp"  
}

Example: equivalent to gulp when installed globally

Use case: running mocha tests with npm

This section highlights important steps to get tests up and running. Examples provided here cover single runs, as well as watch mode.

While searching for a task runner, stability, ease of use, and reporting capabilities come first. Even though mocha is easy to get started with, other tools such as jasmine-node, ava, or jest can do a pretty good job at testing node as well. They are worth giving a try.

supertest is a testing utility wrapper around superagent. It is useful when testing the endpoints of a REST API in end-to-end/contract/integration test scenarios. However, when working on unit tests, there is a need to intercept HTTP requests; for that reason, mocking tools such as the nock HTTP mocking framework deserve a chance.

Starting with command-line test runner instructions gives a pretty good baseline and an idea of how the npm script may end up looking. The following example showcases how to run tests while instrumenting a select set of source code files for reporting purposes:

$ ./node_modules/.bin/istanbul cover \
    --dir ./test/coverage -i 'lib/**' \
    ./node_modules/.bin/_mocha -- --reporter \
    spec  test/**/*spec.js

istanbul is a reporting tool and will be used to generate coverage reports as tests progress.

# in package.json at "test" - add next line
"istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# then run the tests using
$ npm test

In case that code works just fine, we can go ahead and add it to the scripts section of package.json, and that will be enough to execute the test runner command from npm.

There are additional features that make this setup a little more hectic to work with. Even though mocha is the choice of this blog, jest is also a pretty good alternative for testing node.

{
  "scripts": {
    "test": "mocha -R spec test/**/*spec.js",
    "test:compile": "mocha -R spec --compilers js:babel/register test/**/*spec.js",
    "test:watch": "npm test -- --watch",
    "test:coverage": "./node_modules/.bin/istanbul cover --dir ./test/coverage -i 'lib/**' ./node_modules/.bin/_mocha -- --reporter spec test/**/*spec.js"
  }
}

When using istanbul cover mocha, the error “No coverage information was collected, exit without writing coverage information” may be displayed. To avoid this error, use istanbul cover _mocha instead; reporting then becomes available at the end of test execution.

Once the npm scripts are in place, we can leverage the command line once again, but this time using a smaller version of the command. We have to keep in mind that most environments have to have npm available globally.

$ npm run test:coverage
$ npm run test:watch
$ npm run test:compile

Use case: running mocha tests with gulp

$ npm run gulp will use the gulp version declared under scripts. The reason for using gulp while testing is to keep the package.json scripts section smaller. The tasks have to be written in ./gulpfile.js and require gulp plugins to work. gulp tasks can also take on more complex custom work such as deployment from the local machine, codemods, and various other tasks using projects that do not have a cli tool yet. A gulpfile sketch follows.
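A minimal gulpfile sketch, assuming gulp 4 and the gulp-mocha plugin; the globs mirror the npm scripts above:

//gulpfile.js
var gulp = require('gulp');
var mocha = require('gulp-mocha');

function test() {
  //read: false - mocha only needs file paths, not file contents
  return gulp.src('test/**/*spec.js', { read: false })
    .pipe(mocha({ reporter: 'spec' }));
}

function watch() {
  gulp.watch(['lib/**/*.js', 'test/**/*spec.js'], test);
}

exports.test = test;
exports.watch = gulp.series(test, watch);

npm run gulp -- test then runs the suite once, while npm run gulp -- watch re-runs it on every change.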

Conclusion

In this article, we revisited key points for setting up a nodejs workflow. The workflow explored goes from writing code to automated tasks such as linting, testing, and release. There are additional complementary materials in the “Testing nodejs applications” book.

#snippets #nodejs #workflow #npm #nx

For practical reasons and ease of maintenance, deploying smaller portions (atomic deployments) to different platforms is one strategy to achieve zero-downtime deployments. This article discusses how some portions of a large-scale application can be deployed to various cloud services, or servers, to keep the service running even when some nodes stay down.

In this article we will talk about:

  • The cloud is somebody else's servers, and therefore it can fail
  • Plan B: alternative services to provide when the main server is down
  • How to design the application for resiliency and avoid cloud vendor lock-in

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Modern days infrastructure

Cloud and serverless are somebody else's servers. As long as a server is in the equation, failures that lead to downtime are always going to be there, in one way or another. The price tag associated with running the application on somebody else's server, versus what it costs to roll out our own infrastructure, should also come into the picture when talking about nodejs infrastructure.

For more on cheap infrastructure, visit this github collection about free or cheap platforms, good from the prototype to the beta version of your app.

Since cloud-native and serverless platforms are servers at their core, meaning they can fail, having a Plan B, or alternative services to fall back on when the main service is down, makes a good investment.

To solve the nodejs infrastructure equation, we always have to solve the “how to design the application for resiliency, while at the same time avoiding cloud vendor lock-in” equation. The good ol' server, hosted on bare metal, takes time to set up, but solves a whole range of issues while, at the same time, keeping our independence intact.

To compensate, there are CDNs we can tap into to get assets to the user faster. The same applies to distributed caching solutions, which can save the main service dollars when paying for traffic.

In any case, it can be a good thing to split code into chunks that can be deployed to different servers. For example, pictures can be hosted on one service (Netlify, etc.), the database service (server) can be on a completely different platform, and the REST API or messaging service on yet another.

For more on cheap bare-metal server providers, visit this link to make a choice.

Conclusion

In this article, we revisited how to achieve independence from cloud vendor lock-in while, at the same time, building resilient nodejs applications. There are additional complementary materials in the “Testing nodejs applications” book.

#nodejs #cloud #serverless #discuss

One of the reasons nodejs applications slow down is a lack of accountability when it comes to managing memory. We normally defer memory management tasks to the garbage collector. That is an answer to a couple of issues that turned out to also be a problem.

This blog takes a different approach: it only states facts about key memory-hogging operations, and provides quick fixes whenever there is one, without going into too many details or references.

In this article we will talk about:

  • Identifying memory leak issues.
  • Tracing nodejs application memory issues
  • Cleaning nodejs long-lasting objects
  • Production grade memory leak detection tools

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Memory Leak

Managing memory can be a daunting task in a nodejs environment. Some strategies to detect and correct memory leaks can be found in the following articles.

This article is unfinished business, and I will be adding more content as I experience memory leak problems, or find some interesting use cases off Github and StackOverflow. In the meantime, the sketch below shows a first diagnostic step.
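As a first diagnostic step, sampling process.memoryUsage() over time can reveal a heap that only ever grows; a minimal sketch, with the interval and thresholds picked arbitrarily:

//memory-watch.js
var samples = [];

setInterval(function () {
  var usage = process.memoryUsage();
  samples.push(usage.heapUsed);
  console.log('rss=' + (usage.rss / 1048576).toFixed(1) + 'MB' +
    ' heapUsed=' + (usage.heapUsed / 1048576).toFixed(1) + 'MB');

  //a heap that grows monotonically across many samples hints at a leak
  if (samples.length >= 10 && samples.every(function (value, i) {
    return i === 0 || value >= samples[i - 1];
  })) {
    console.warn('heapUsed has grown for 10+ consecutive samples: possible leak');
    samples = [];
  }
}, 5000).unref();

For deeper inspection, heap snapshots via node --inspect and Chrome DevTools remain the reference tooling.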

Conclusion

In this article, we focused on identifying potential sources of memory leaks in nodejs applications and provided early detection mechanisms, so that the same problem does not happen again. There are additional complementary materials on memory management in the “Testing nodejs Applications” book.

#snippets #performance #nodejs #memory-leak

This blog is at the same time a reflection on taking the monorepo route, from both the decision-making perspective and the implementation side. This document takes two approaches to the problem.

First, identify aspects that have to be containerized, test the containers both in development and production mode.

Second, create the actual codebase built atop containers, to test and deliver code to production. It will be better if CI/CD is included in this package.

This article is under active development, more information is going to be added frequently, as I find free time.

In this article we will talk about:

  • What is the structure of a typical monorepo
  • What are the tools used to manage daily development activities
  • Compared to a multirepo project layout, what are the main components of a monorepo that do not exist in a multirepo, and vice versa
  • How do packages relate to the actual application from the content perspective
  • What are the key differences between a monorepo and monolith
  • What are key differences between a monorepo and a multi-repos
  • How is monorepo different from git-submodules
  • Is it possible to leverage git submodule add <url> projects/url to compose multiple projects into one independent project
  • What are the best strategies for transitioning from multi-repos to monorepo
  • What are the best strategies for transitioning from monolith to monorepo
  • How do frontend/backend/widgets code repositories fit into monorepo project layout architecture
  • How to deploy monorepo projects to different platforms (frontend and widgets to CDN, backend to backend servers)
  • How to share the core packages amongst monorepo components, without using npm
  • How packaging monorepo works
  • Do monorepos allow deploying and installing private packages
  • How to automate versioning in a monorepo context
  • How to automate change-log in a monorepo context
  • How to automate release notes in a monorepo context
  • How to automate deployment in a monorepo context
  • How to manage deployment keys in a monorepo context

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

The monorepo project layout architecture

The monorepo project layout architecture is ideal when planning to share Web UI/SDKs/API and backend code.

What are key differences between a monorepo and monolith

  • Misconceptions about monorepos: monorepo != monolith ~ Nrwl Blog
  • monorepo and npm ~ npm Blog

How is monorepo different from git-submodules

Tools to manage monorepos

npm is not well-positioned to manage monorepos, as we write this article. There are, however, other tools that help to achieve that. Here is a non-exhaustive list:

  • lerna ~ community-backed
  • rush ~ Microsoft backed
  • yarn ~ Facebook-backed. Uses workspaces; in reality, workspaces are monorepo components as well.

  • How to successfully manage a large scale JavaScript monorepo aka megarepo ~ Jonathan Creamer Blog

Sharing code between repos in the same monorepo, without needing npm

The package.json's private: true property makes sure a package never lands on the public npm registry; sharing then relies on git/github instead. One extra mile when using monorepos is to share the code using tar files.

It is possible to use tar files. That will require each release to have its own tar file that is deployable and reachable at a particular endpoint.

{
  "dependencies": {
    "common-package": "file:../common-package", //OR
    "tar-common-package": "github.com/../../common-package.tar.gz"
  }
}

github.com makes it possible to install packages straight from its servers.
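npm understands this github shorthand out of the box; a minimal sketch, with a hypothetical githubuser/common-package repository:

$ npm install githubuser/common-package         # installs straight from the GitHub repo
$ npm install githubuser/common-package#v1.2.0  # pin a branch, tag, or commit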

Sharing code via npm without opening an npmjs.com account

This section introduces two concepts with a direct dependency on each other. The first concept deals with using git/github as an npm hub. This makes it possible to avoid opening an account on hosting services such as npmjs, while still keeping private packages. The second concept makes sure private packages are installable using npm install, just like regular packages.

Alternatively, there is a new infrastructure that github.com rolled out: GitHub Packages, which can be installed using the npm installer. We will see how to operate these as well.

How to manage authentication keys

One of the problems sharing a large codebase relates to security. How is it possible to share authentication keys, such as database passwords, without compromising the overall security of the application?

Containerization with Docker

Docker makes it possible to run a stack of applications, regardless of the system the application is developed on. Docker also makes it possible to successfully simulate the application's behavior before the application finally hits the production servers.

Container Orchestration with kubernetes

If Docker symbolizes Containerization, kubernetes is aligning itself as the best container orchestration resource. This guide provides resources to get started, and the basic designs that are commonly used in the MEAN stack world.

Installation can be done via MacPorts or Homebrew. It is always possible to use binaries as well.
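For the record, a kubectl installation sketch on macOS; the package names below are the usual ones, but worth double-checking against the official documentation:

$ brew install kubectl        # Homebrew
$ sudo port install kubectl   # MacPorts
$ kubectl version --client    # verify the client binary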

Containerized database

This section is an exploration of the implementation of a clustered database. We will see what it takes to deploy a containerized database, how to add and upgrade engines, how to back up and migrate data, and how to migrate to new models.

How to deploy monorepo apps

In a monorepo with multiple deployable components, how do we deploy one section to one platform and another section to another platform?

Every single build has to have a corresponding individual deploy script. Push-to-deploy would be a challenge, unless there is a way to selectively detect which part has to go where. Otherwise, all servers running an instance of the code have to have a copy of the full monorepo.

Conclusion

In this article, we reviewed what it takes, and the reasons, to move to a monorepo architecture. We also revisited the monorepo coupled with containerization techniques to deliver a better developer experience. There are additional complementary materials in the “Testing nodejs applications” book.

#nodejs #monorepos #multirepos #monolyths #microservice

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. Even though the real-time aspect may be attributed to the WebSocket implementation, the real-time reactive aspect of nodejs applications relies heavily on pub/sub mechanisms, most of the time backed by datastore engines like redis. This article explores how to integrate the redis datastore into a nodejs application.

In this article we will talk about:

  • redis support with and without expressjs
  • redis support with and without WebSocket push mechanism
  • Alternatives to redis in nodejs world and beyond

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

The following code sample showcases how nodejs/redis integration can be achieved. It demonstrates that it is still possible to share sessions via a middleware between socket.io and expressjs.

var express = require('express');
var app = express();
var server = require('http').createServer(app);
var sio = require("socket.io")(server),
	redis = require('redis'),
	rhost = process.env.REDIS_HOST,
	rport = process.env.REDIS_PORT,
	pub = redis.createClient(rport, rhost),
	sub = redis.createClient(rport, rhost);

function middleware(req, res, next){
 //session initialization thing
 next();
}

//socket.io/expressjs session sharing middleware
sio.use(function(socket, next){
 	middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);

//somewhere
sio.sockets.on("connection", function(socket) {

 //socket.request.session
 //Now it's available from `socket.io` sockets too! Win!
 socket.on('message', function(event){
	 var payload = JSON.parse(event.payload || event),
	 	user = socket.handshake.user || false;

	 //except when coming from pub
	 pub.publish(payload.conversation, JSON.stringify(payload));
 });
});

//redis listener - sub.subscribe(channel) should be called for each conversation of interest
sub.on('message', function(channel, message) {
	var payload = JSON.parse(message);
	sio.sockets.in(payload.conversation).emit('message', payload);
});

Example: excerpt source: StackOverflow

What can possibly go wrong?

When trying to figure out how to approach redis datastore integration into a nodejs application for inter-process communication and real-time features, the following points may be a challenge:

  • How to decouple the WebSocket events from the redis specific (pub/sub) events. We should be able to decouple, but still provide an environment where interoperability is possible at any time.
  • How to make integration modular, testable, and overall friendly to the rest of the application ecosystem

When testing this implementation, we should expect additional challenges to emerge:

  • The redis client instances (pub/sub) are created as soon as the library loads, and a redis server should be up and running by that time. The issue: when testing the application, there should be no server or any other system dependency hindering the application from being tested.
  • getting rid of redis server with a drop-in-replacement, or stubs/mocks, is more of a dream than reality ~ hard but feasible.

There is additional information on mocking and stubbing the redis datastore in the “How to Mock redis datastore” article.

Conclusion

In this article, we revisited how to enhance a nodejs application with a redis-based pub/sub mechanism, critical to having a reactive, real-time experience. The use of WebSocket and pub/sub powered by a key/value datastore was the main focus of this article. There are additional complementary materials in the “Testing nodejs applications” book.

#snippets #nodejs #integration #redis

The reactive aspect of nodejs applications is synonymous with the nodejs runtime itself. However, the real-time magic is attributed to the WebSocket addition. This article introduces how to integrate WebSocket support into an existing nodejs application.

In this article we will talk about:

  • WebSocket support with or without socket.io
  • WebSocket support with or without expressjs
  • Modularization of WebSocket code

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

Express routes can use the socket.io instance to deliver messages. A socket/socket.io-enabled application looks like the following:

//module/socket.js - server or express app instance
var socket = require('socket.io');
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connection', function(sock){ /* register handlers on sock */ });
  return io;
};
//OR
//module/socket.js
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

//module/routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(){
  route.all('*', function(req, res, next){
    res.send();
    next();
  });
  return route;
};

//in server.js
var app = require('express')(),
  server = require('http').createServer(app),
  sio = require('./module/socket.js')(server);

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between SocketIO and Express
//sessionMiddleware is the expressjs session middleware instance (sketched below)
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

//application app.js|server.js initialization, etc.
app.use(require('./module/routes')());

What can possibly go wrong?

When working in this kind of environment, we will find these two points to be of interest, if not challenging:

  • Making the socket.io application use the same expressjs server instance, or sharing the route instance with the socket.io server
  • Sharing sessions between the socket.io and expressjs applications (see the sketch after this list)
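The session-sharing middleware referenced in the code above can be sketched with the express-session package; the secret and options below are placeholders, not the article's own configuration:

//a minimal sketch, assuming the express-session package
var session = require('express-session');

var sessionMiddleware = session({
  secret: process.env.SESSION_SECRET || 'keyboard cat',
  resave: false,
  saveUninitialized: false
});

//the same instance serves both frameworks
app.use(sessionMiddleware);
sio.use(function(socket, next){
  sessionMiddleware(socket.request, socket.request.res, next);
});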

Conclusion

In this article, we revisited how to add a real-time experience to a nodejs application, with WebSocket support as the main focus. There are additional complementary materials in the “Testing nodejs applications” book.

#snippets #nodejs #integration #WebSocket

This post highlights snapshots of best practices and hacks to code, test, deploy, and maintain large-scale nodejs apps. It provides the big lines of what became a book on testing nodejs applications.

If you haven't yet, read the How to make nodejs applications modular article. This article is an overall follow-up.

Like some of the articles that came before this one, we are going to focus on a simple question as our north star: what are the most important questions developers have when testing a nodejs application? When possible, a quick answer will be provided; otherwise, we will point in the right direction where information can be found.

In this article we will talk about:

  • Why we should even test, and the BDD versus TDD schools
  • What should be tested, and choosing the right testing tools
  • Testing servers, modules, routes, controllers, services, and middleware
  • Testing asynchronous code, models, WebSockets, and background jobs

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

var express = require('express'),
  app = express(),
  server = require('http').createServer(app);
//...
require('./config');
require('./utils/mongodb');
require('./utils/middleware')(app);
require('./routes')(app);
require('./realtime')(app, server);
//...
module.exports.server = server; 

Example:

The code provided here is a recap of the How to make nodejs applications modular article. You may need to give it a test drive, as this section highlights an already modularized example.

Testing

Automation is what developers do for a living. Manual testing is tedious and repetitive, and those are two key characteristics of things we love automating. Automated testing is quite intimidating, for newbies and veterans alike. Testing tends to be more of an art: the more you practice, the better you hone your craft.

In the blogosphere:

  • My node Test Strategy ~ RSharper Blog
  • nodejs testing essentials

BDD versus TDD

Why should we even test

Testing is unanimously accepted within the developer community; the question is always around how to go about it.

There is a discussion mentioned in the first chapter, between @kentbeck, @martinfowler and @dhh, that made the rounds on social media and blogs, and finally became a subject of reflection in the community. When dealing with legacy code, there should be a balance: adopt tdd as one tool in our toolbox, not the whole toolbox.

In the book, we do the following exercise as an alternative to classic tdd: read, analyze, modify if necessary, rinse and repeat. We cut the bullshit, get to test whatever needs to be tested, and let nature take its course.

One thing is clear: we cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is “how” to go about testing.

There is a summary of the discussions mentioned earlier, titled Is TDD Dead?. In the blogosphere:

  • BDD-TDD ~ RobotLovesYou Blog
  • My node Test Strategy ~ RSharper Blog
  • A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials

What should be tested

Before we dive into it, let's re-examine the pros and cons of automated tests, in the current case, unit tests.

Pros:

  • Steers release confidence
  • Prevents common use-case and unexpected bugs
  • Helps the project's new developers better understand the code
  • Improves confidence when refactoring code
  • A well-tested product improves the customer experience

Cons:

  • Take time to write
  • Increase learning curve

At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything: features of a product, or functions of code. Re-testing large applications manually is daunting, exhausting, and sometimes simply not feasible.

A good way to think about testing is not in terms of layers (controllers, models, etc.). Layers tend to be bigger. It is better to think in terms of something much smaller, like a function (the TDD way) or a feature (the BDD way).

In brief, every controller, piece of business logic, utility library, nodejs server, and route: all features are set to be tested ahead of release.

There is an article on this blog that gives more insight on how to create good test cases (Case > Feature > Expectations | GivenWhenThen), titled “How to write test cases developers will love reading”. In the blogosphere:

  • Getting started with nodejs and mocha

Choosing the right testing tools

There is no shortage of tools in the nodejs community; the problem is analysis paralysis. Whenever the time comes to choose testing tools, several layers should be taken into account: test runners, test doubles, reporting, and eventually any compiler that needs to be added to the mix.

Other than that, here is a list of a few things to consider when choosing a testing framework and other testing tools:

  • Learning curve
  • How easy it is to integrate into the project/existing testing frameworks
  • How long it takes to debug testing code
  • How good the documentation is
  • How big the community is, and how well the library is maintained
  • What it may solve faster (spies, mocking, coverage reports, etc.)
  • Instrumentation and test reporting, just to name a few

There are sections dedicated to providing hints and suggestions throughout the book. There is also the article “How to choose the right tools” on this blog, which gives a baseline framework for choosing any tool, not only testing frameworks. Finally, in the blogosphere:

  • jasmine vs. mocha, chai and sinon
  • Evan Hahn has pretty good examples of the use of test doubles in the How do I jasmine blog post
  • Getting started with nodejs and jasmine ~ has some pretty amazing examples, and is simple to start with
  • Testing expressjs REST APIs with Mocha

Testing servers

The not-so-obvious part when testing servers is how to simulate starting and stopping the server. These two operations should not bootstrap dependent servers (databases, data-stores) or cause side effects (network requests, writing to files), to reduce the risk associated with running an actual server.
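A minimal sketch of that boot/shutdown discipline, assuming server.js exports its http server as in the earlier snippet:

//server.spec.js
var assert = require('assert');
var server = require('./server').server;

describe('server', function(){
  //port 0 asks the OS for any free port: no clash with a running instance
  before(function(done){ server.listen(0, done); });
  after(function(done){ server.close(done); });

  it('binds a port once started', function(){
    assert(server.address().port > 0);
  });
});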

There is a chapter dedicated to testing servers in the book. There is also an article on this blog that can give more insights. In the blogosphere:

  • How to correctly unit test express server
  • Testing nodejs with mocha ~ a better code structure organization that makes it easy to test and get good test coverage

Testing modules

Testing modules is not that different from testing a function or a class. When we start looking at it from this angle, things become a little easier.

The grain of salt: a module that is not directly a core component of our application should be left alone and mocked out entirely when possible. This way, we keep things isolated.

There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries (modules) in the book. There is also an entire series of articles on this blog about modularization techniques: a more theoretical “How to make nodejs applications modular”, and a more technical “How to modularize nodejs applications”. In the blogosphere:

  • Export This: Interface Design Patterns for nodejs Modules ~ Alon Salant, CEO of Good Eggs
  • nodejs module patterns using simple examples ~ Darren DeRider
  • How to modularize your Chat Application

Testing routes

Challenges while testing expressjs Routes

Some of the challenges associated with testing routes are testing authenticated routes, mocking requests, and mocking responses, as well as testing routes in isolation without a need to spin up a server. When testing routes, it is easy to fall into the integration testing trap, either for simplicity or for lack of motivation to dig deeper.

The integration testing trap is when a developer confuses integration tests (or E2E) with unit tests, and vice versa. The success of balanced test coverage lies in identifying sooner the kind of tests adequate for a given context, and what percentage of each kind to apply.

For a test to be a unit test in the route testing context, it should:

  • Focus on testing a code block (function, class, etc.), not the output of a route
  • Mock requests to third-party systems (payment gateways, email systems, etc.)
  • Mock database read/write operations
  • Test worst-case scenarios such as missing data and data structures

There is a chapter dedicated to testing routes in the book. There is also the article “Testing expressjs Routes” on this blog that gives more insight on the subject. In the blogosphere:

  • A TDD approach to building a todo API using nodejs and mongodb
  • Marcus on supertest ~ Marcus Soft Blog

Testing controllers

When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or even classes. In MVC jargon, this layer is also known as the controller layer.

Challenges when testing controllers are, by no surprise, the same as when testing expressjs route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations, or service layers, that are not core/critical to validating the controller's expectations is one of those challenges.

So is mocking controller request/response objects and, when necessary, some middleware functions.
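Hand-rolled request/response fakes are often all a controller test needs; a minimal sketch, with a hypothetical getProfile handler:

var assert = require('assert');

//hypothetical handler: echoes the requested id back
function getProfile(req, res){
  res.status(200).json({ id: req.params.id });
}

it('responds with the requested id', function(){
  var req = { params: { id: '12' } };
  var res = {
    statusCode: null,
    body: null,
    status: function(code){ this.statusCode = code; return this; },
    json: function(payload){ this.body = payload; return this; }
  };

  getProfile(req, res);

  assert.equal(res.statusCode, 200);
  assert.equal(res.body.id, '12');
});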

There is a chapter dedicated to testing controllers in the book. There is also the article Testing nodejs controllers with expressjs framework on this blog that gives more insight on the subject. In the blogosphere:

  • How to test express controllers ~ covers mocking responses, etc.

Testing services

There are some instances where adding a service layer makes sense.

One of those instances is when an application has a collection of single functions under a utility (utils) umbrella. Chances are some of the functions under that umbrella are related in terms of the features they serve, the functionality they offer, or both. Such functions are a good use case to be grouped under a class: a service.

Another good example is applications that heavily use the model. Chances are the same functions are re-used in multiple instances, and fixing an issue involves fixing multiple places as well. When that is the case, such functions can be grouped under one banner, in such a way that an update to one function gets reflected in every instance where the function is used.

From these two use cases, testing services has no one-size-fits-all strategy. Every service should be dealt with depending on the context it is operating in.

There is a chapter dedicated to testing services in the book. In the blogosphere:

  • “Building Structured Backends with nodejs and HexNut” by Francis Stokes ~ aka @fstokesman on Twitter, source ...

Testing middleware

The middleware, in expressjs (connectjs) jargon, are in a sense hooks that intercept, process, and forward the result to the rest of the route. It is no surprise that testing middleware shares the same challenges as testing route handlers and controllers.

There is a chapter dedicated to testing middleware in the book. There is also the article “Testing expressjs Middleware” on this blog that gives more insight on the subject. In the blogosphere:

  • How to test expressjs controllers

Testing asynchronous code

Asynchronous code is a wide subject in the nodejs community. Things ranging from regular callbacks, promises, async/await constructs, streams, and event streams (reactive) all fall under the asynchronous umbrella.

The challenges associated with asynchronous testing depend on the use case and context at hand. However, there are striking similarities, say, between testing async/await and testing a promise.

When an object is available, it makes sense to get a hold of it and execute assertions once it resolves. That is feasible for promises, streams, and the async/await construct. However, when the object is some kind of event, then the hold on the object can be used to add a listener and assert once the listener is invoked. The sketch below illustrates all three cases.
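A minimal mocha sketch covering the cases above: returning a promise, awaiting it, and asserting from an event listener; the emitter and values are illustrative:

var assert = require('assert');
var EventEmitter = require('events').EventEmitter;

describe('asynchronous code', function(){
  it('resolves a promise', function(){
    //returning the promise lets mocha wait for the assertion
    return Promise.resolve(42).then(function(value){ assert.equal(value, 42); });
  });

  it('awaits an async function', async function(){
    var value = await Promise.resolve(42);
    assert.equal(value, 42);
  });

  it('asserts once an event fires', function(done){
    var emitter = new EventEmitter();
    emitter.on('message', function(payload){
      assert.equal(payload, 'ping');
      done(); //signals mocha the asynchronous assertion ran
    });
    emitter.emit('message', 'ping');
  });
});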

There are multiple chapters dedicated to testing asynchronous code in the book. There are also multiple articles on this blog that give more insight on the subject, such as “How to stub a stream function”, “How to Stub Promise Function and Mock Resolved Output”, and “Testing nodejs streams”.

Testing models

Testing models goes hand in hand with mocking database access functions.

Functions that access or change database state can be replaced by spies or fakes: custom function replacements capable of supplying or emulating results similar to the functions they replace.

sinon may not make unanimity, but it is a feature-complete, battle-tested test double library, among many others to choose from.
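A minimal sinon sketch, stubbing the findById of the User model used in earlier snippets, so no database is hit:

var assert = require('assert');
var sinon = require('sinon');
var User = require('./models').User; //the model from the earlier getProfile example

it('finds a user without touching the database', function(done){
  var fakeUser = { _id: '12', name: 'Jane' };
  //yields() invokes the node-style callback with (null, fakeUser)
  var stub = sinon.stub(User, 'findById').yields(null, fakeUser);

  User.findById('12', function(error, user){
    assert.ifError(error);
    assert.equal(user.name, 'Jane');
    stub.restore(); //always hand the original function back
    done();
  });
});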

There is a chapter dedicated to testing models in the book. There is also an article on this blog that gives more insight on the subject. In the blogosphere:

  • Mocking/Stubbing/Spying mongoose models
  • stubbing mongoose model ~ question and answers on StackOverflow
  • Mocking database calls by wrapping mongoose with mockgoose

Testing WebSockets

Some of the challenges of testing WebSockets can be summarized as trying to simulate sending and receiving a message on the WebSocket endpoint.

There is a chapter dedicated to testing WebSockets in the book. There is also an article on this blog that can give more ideas on how to go about testing WebSocket endpoints, and another one on how to integrate WebSockets with nodejs. Elsewhere in the blogosphere:

  • Testing socket.io with mocha, should.js and socket.io client
  • sharing session between expressjs and socket.io

Testing background jobs

Background jobs bring batch processing to the nodejs ecosystem. Background jobs constitute a special use case of asynchronous communication that spans time and the processes the system is running on.

Testing this kind of complex construct requires distilling the fundamental work done by each function/construct, focusing on the signal without losing the big picture. It requires quite a paradigm shift (word used with reservation).

There is a chapter dedicated to testing background jobs in the book. There is an article, Testing nodejs streams, on this blog that gives more insight on the subject. In the blogosphere:

  • Mocking/Stubbing/Spying mongoose models ~ CodeUtopia Blog

Conclusion

Some source code samples came from QA sites such as StackOverflow, hackers' gists, Github documentation, developer blogs, and my personal projects.

There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because mentioning all of them could fill a book.

In this article, we highlighted what it takes to test various layers, while drawing a distinction between the BDD/TDD testing schools. There are additional complementary materials in the “Testing nodejs applications” book.

#snippets #nodejs #testing #tdd #bdd

In most integration and end-to-end route testing, a live server may be deemed critical to making reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment, where opening server ports may be restricted, if not outright prohibited. In this article, we explore mocking HTTP requests/responses to make the use of an actual server obsolete.

In this article we will talk about:

  • Mocking the Server instance
  • Mocking Route's Request/Response objects
  • Modularization of routes and revealing server instance
  • Auto reload (hot reload) using nodemon, supervisor, or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

//user profile route handler
var User = require('./models').User;
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router with Authentication Middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getProfile = require('./users/get-user');
router.get('/users/:id', authenticated, getProfile);
module.exports = router;

What can possibly go wrong?

When trying to figure out how to approach testing expressjs routes, the driving force behind falling into the integration testing trap is the need to start a server. The following points may be a challenge:

  • Routes should be served at any time while testing
  • Testing in sandboxed environments restricts server use (opening new ports, serving requests, etc.)
  • Mocking request/response objects wipes the need for a server out of the picture

Testing routes without spinning up a server

The key is mocking request/response objects. A typical REST integration test shares similarities with the following snippet.


var app = require('express')(),
  request = require('./support/http'),
  expect = require('chai').expect; //or another expect-style assertion library

describe('req .route', function(){
  it('should serve on route /user/:id/edit', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.path).to.equal('/user/:id/edit');
      res.end();
    });

    request(app)
      .get('/user/12/edit')
      .expect(200, done);
  });
  it('should serve get requests', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.method).to.equal('get');
      res.end();
    });

    request(app)
    .get('/user/12/edit')
    .expect(200, done);
  });
});

Example:

Example from StackOverflow and supertest. supertest spins up a server if necessary. In case we don't want to have a server, dupertest can be a reasonable alternative. request = require('./support/http') is the utility that may use either of those two libraries to provide a request.

Choosing tools

If you haven't already, reading the “How to choose the right tools” blog post gives insights into the framework we used to choose the tools we suggest in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes by mocking out the server:

  • There exist well-respected test runners such as jasmine (jasmine-node), ava, and jest in the wild. mocha does just fine for example's sake.
  • There are also code instrumentation tools in the wild. mocha integrates well with the istanbul test coverage and reporting library.
  • supertest, nock, and dupertest are frameworks for mocking HTTP; nock intercepts requests, and dupertest responds better to our demands (not spinning up a server).

Workflow

If you haven't already, read the “How to write test cases developers will love” article.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR "nyc test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Conclusion

To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer.

Testing nodejs routes is quite intimidating on the first encounter. This article contributed to shifting fear into opportunities.

Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good, meaningful message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.

#tdd #testing #nodejs #expressjs #server
