
The configuration is one of the software component layers and, as such, should be testable in isolation like any other component. Modularization of the configuration layer improves its reusability and testability. The question we should be asking is: how do we get there? That is the objective of this article.

The Twelve-Factor App, a collection of good practices, advocates for “strict separation of configuration from code” and “storing configuration in environment variables”, among other things.

The Twelve-Factor App challenges the status quo when it comes to configuration management. The following paragraph, taken verbatim from the documentation, is a clear illustration of that fact.

“A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.” ~ verbatim text from The Twelve-Factor App ~ config section

In this article we will talk about:

  • Differentiation of configuration layers
  • How to decouple code from configuration
  • How to modularize configuration for testability
  • How to prevent configuration key leaks into the public space

Techniques and ideas discussed in this blog are available in more detail in the “Configurations” chapter of the “Testing nodejs Applications” book. You can grab a copy on this link.

Show me the code

const Twitter = require('twitter');

function TwitterClient(accounts, ids) {

    this.client = new Twitter({
        consumer_key: `Plain Text Twitter Consumer Key`,
        consumer_secret: `Plain Text Twitter Consumer Secret`,
        access_token_key: `Plain Text Twitter Access Token Key`,
        access_token_secret: `Plain Text Twitter Access Token Secret`
    });

    //accounts such as : @TechCrunch, @Twitter, etc 
    this.track = Array.isArray(accounts) ? accounts.join(',') : accounts;
    //ids: corresponding Twitter Accounts IDs 816653, 783214, etc  
    this.follow = Array.isArray(ids) ? ids.join(',') : ids;
}

/**
 * <code>
 * let stream = new TwitterClient('@twitter', '783214').getStream();
 * stream.on('error', error => handleError(error));
 * stream.on('data', tweet => logTweet(tweet));
 * </code>
 * @name getStream - Returns Usable Stream
 * @returns {Object<TwitterStream>}
 */
TwitterClient.prototype.getStream = function(){
    return this.client.stream('statuses/filter', {track: this.track, follow: this.follow});
};

Example: a Twitter client with hard-coded credentials

What can possibly go wrong?

When trying to figure out how to approach modularizing configurations, the following points may be a challenge:

  • Being able to share the source code without leaking secret keys to the world
  • Laying down a strategy to move configurations into configuration files
  • Making configuration settings as testable as any module.

The following sections will explore how to make the points stated above work.

Layers of configuration of nodejs applications

Although this blog article provides a basic understanding of configuration modularization, it defers configuration management to another blog post: “Configuring nodejs applications”.

From a production readiness perspective, at least in the context of this blog post, there are two distinct layers of application configurations.

The first layer consists of configurations that a nodejs application needs to execute intrinsic business logic. These will be referred to as environment variables/settings. Third-party issued secret keys or server port number configurations fall under this category. In most cases, you will find such configurations in static variables inside the application.

The second layer consists of configurations required by the system that is going to host the nodejs application. Database server settings, monitoring tools, SSH keys, and other third-party programs running on the hosting entity are a few examples that fall under this category. We will refer to these as system variables/settings.

This blog will be about working with the first layer: environment settings.

Decoupling code from configuration

The first step in decoupling configuration from code is to identify and normalize the way we store our environment variables.

module.exports = function hasSecrets(){
    const SOME_SECRET_KEY = 'xyz=';
    ...
};

Example: function with an encapsulated secret

The previous function encapsulates secret values that can be moved outside the application. If we apply this technique, SOME_SECRET_KEY will be moved outside the function, and imported whenever needed instead.

const SOME_SECRET_KEY = require("./config").SOME_SECRET_KEY;

module.exports = function hasSecrets(){
    ...
};

Example: function with a decoupled secret value

This process has to be repeated all over the application, till every single secret value is replaced with its constant equivalent. It doesn't have to be good on the first try; it simply has to work. We can make it better later on.

Configuration modularization

For curiosity's sake, what does config.js look like at the end of the “decoupling configuration from code” exercise?

module.exports.SOME_SECRET_KEY = 'xyz=';

Example: the first iteration of decoupling configuration from code

This step works but has essentially two key flaws:

  • In a team of multiple players, each player having their own environment variables, the config.js will become a liability. It doesn't scale that well.
  • This strategy will not prevent the catastrophe of leaking secrets to the public, in case the code becomes open source.

To mitigate this, after normalizing the way we store and retrieve environment variables, the next step is to organize the results in a module. Modules are portable and easy to test.

Modularization makes it possible to test configuration in isolation. Yes, we will have to prove to ourselves that it works before we convince others that it does! One possible shape of such a module follows.
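
The following is a minimal sketch of what such a module could look like, reading values from process.env ahead of the dotenv discussion below; the key names are assumed from the earlier examples.

//config/index.js - a hedged sketch of an environment-backed config module
module.exports = {
    SOME_SECRET_KEY: process.env.SOME_SECRET_KEY,
    PORT: process.env.PORT || 3000,
};

Example: a config module that can be imported and tested in isolation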

Measures to prevent private key leakage

The first line of defense when it comes to preventing secret keys from leaking to the public is to make sure not a single private value is stored in the codebase itself. The following example illustrates this statement.

module.exports = function leakySecret(){
    const SOME_SECRET_KEY = 'xyz=';
    ...
};

Example: function with a leak-able secret key

The second line of defense is to decouple secret values from the application itself, and to use an external service to provision secret values at runtime. nodejs makes it possible to read such values from the process environment, via process.env.

A simple yet powerful tool for this is the dotenv library. This library can be swapped out, depending on taste or project requirements.

Alternatives to dotenv include convict.js.

Last but not least, since we are using git, adding .env to .gitignore prevents contributors from committing their .env files to the shared repository by accident.

dotenv-extended makes it possible to read *nix variables into a dotenv file.

require('dotenv').config();
const Twitter = require('twitter');

function TwitterClient(accounts, ids) {
    this.client = new Twitter({
        consumer_key: process.env.TWITTER_CONSUMER_KEY,
        consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
        access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
        access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
    });
    ...
}

Example: reading secrets from environment variables provisioned by dotenv, keeping them out of the codebase

Conclusion

Modularization is key to crafting reusable, composable software components. The configuration layer is no exception to this rule. Modularization of configurations brings elegance and eases the management of critical information such as secret keys.

In this article, we re-asserted that with a little bit of discipline, and without breaking our piggy bank, it is still possible to better manage application configurations. Modularization of configuration makes it possible to reduce the risk of secret key leaks, as well as to increase testability. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #nodejs #configuration

The WebSocket protocol, built on top of HTTP's upgrade mechanism, makes near real-time communication magic a reality. Adding this capability to an already complex application does not make large-scale applications any easier to work with. Using modularization techniques to decouple the real-time portion of the application makes maintenance a little easier. The question is: how do we get there? This article applies modularization techniques to achieve that.

There is a wide variety of choices when it comes to WebSocket implementations in the nodejs ecosystem. For simplicity, this blog post will provide examples using socket.io, but the ideas expressed here are applicable to any other nodejs WebSocket implementation.

In this article we will talk about:

  • How to modularize WebSocket for reusability
  • How to modularize WebSocket for testability
  • How to modularize WebSocket for composability
  • The need for a store manager in a nodejs WebSocket application
  • How to integrate redis in a nodejs WebSocket application
  • How to modularize redis in a nodejs WebSocket application
  • How to share session between an HTTP server and WebSocket server

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//in server.js 
var express = require('express'); 
var app = express();
...
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
...
var server = require('http').createServer(app);
var port = process.env.PORT || 8080;
server.listen(port, () => console.log(`Listening on ${ port }`));
var wss = require('socket.io')(server);
//Handling realtime data
wss.on('connection', (socket) => { //'connect' is an alias of 'connection'
    socket.on('error', () => {});
    socket.on('pong', () => { socket.isAlive = true; });
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

What can possibly go wrong?

The following points may be a challenge when modularizing WebSocket nodejs applications:

  • WebSocket handlers are tightly coupled to the rest of the application. The challenge is how to reverse that.
  • How to modularize for optimal code reuse and easy testability

The following sections will explore how to make the points stated above work.

How to modularize WebSocket for reusability

When looking at the WebSocket handlers, something strikes the eye: every handler has a signature that looks like any other event handler common in the JavaScript ecosystem. We also realize that handlers are tightly coupled to the WebSocket object. To break the coupling, we can apply one technique: eject handlers from the WebSocket, and inject the WebSocket and Socket objects wherever possible (composition), as sketched below.
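
A rough sketch of an ejected handler, under the assumption of a hypothetical makeMessageHandler factory, could look like this:

//handlers/message.js - hypothetical ejected handler, receiving its dependencies
module.exports = function makeMessageHandler(io, socket){
    return function onMessage(payload){
        //business logic lives here, decoupled from the socket.io wiring
        io.emit('message', payload);
    };
};

//wiring, e.g. in module/socket.js
//io.on('connection', (socket) => socket.on('message', makeMessageHandler(io, socket)));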

How to modularize WebSocket for testability

As noted earlier, the WebSocket event handlers are tightly coupled to the WebSocket object. Mocking the WebSocket object comes with a hefty price: losing the implementation of the handlers. To avoid that, we can tap into two techniques: ejecting handlers, and loading the WebSocket via a utility library. A test sketch follows.
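
As a minimal sketch, assuming the hypothetical makeMessageHandler factory from the previous example, the ejected handler can now be tested with a plain object standing in for the WebSocket server:

var sinon = require('sinon');
var makeMessageHandler = require('./handlers/message');

it('relays incoming messages', function(){
    var io = { emit: sinon.spy() };//a plain object stands in for the WebSocket server
    var handler = makeMessageHandler(io, {});
    handler({ text: 'hello' });
    sinon.assert.calledWith(io.emit, 'message', { text: 'hello' });
});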

How to modularize WebSocket for composability

It is possible to shift the perspective on the way the application adds WebSocket support. The question we can ask is: is it possible to restructure our code in such a way that it requires only one line of code to wipe out the WebSocket support? An alternative question would be: is it possible to add WebSocket support to the base application using only one line of code? To answer these two questions, we will tap into a technique similar to the one we used to mount an app instance onto a set of routers (an API, for example); a sketch follows.
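
Assuming a module/socket.js such as the one shown later in this article, the one-line mount could look like this sketch:

//in server.js - WebSocket support added, or removed, with one line
var server = require('http').createServer(app);
require('./module/socket')(server); //deleting this single line wipes out WebSocket support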

The need for a store manager in a nodejs WebSocket application

JavaScript, and for that matter nodejs, is a single-threaded programming language.

However, that does not mean that parallel computing is not feasible. The threading model can be replaced with a process-based model when it comes to parallel computing. This enhancement comes with an additional challenge: How to make it possible for processes to communicate or share data, especially when processes are running on two separate CPUs.

The answer is to use one or more third-party processes that handle inter-process communication. Key/value stores are good examples that make this magic possible.

How to integrate redis in a nodejs WebSocket application

redis comes with an expressive API that makes it easy to integrate with an existing nodejs application.

It makes sense to question the approach used while adding this capability to the application. In the following example, any message received on the wire will be logged into the shared redis key store.

All subscribed message listeners will then be notified about an incoming message. In the event there is a response to send back, the same approach is followed, and the listener is responsible for sending the message back down the wire. This process may be repetitive, but it is one of the good ways to handle this kind of scenario. The sketch below illustrates one way to wire this.
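
The following is a hedged sketch, assuming the classic callback-style node_redis API; the channel name and the bridge function are illustrative, not part of the original code.

var redis = require('redis');
var pub = redis.createClient();
var sub = redis.createClient();

module.exports = function bridge(io){
    sub.subscribe('messages');
    //notify subscribed listeners about incoming messages
    sub.on('message', function(channel, payload){
        io.emit('message', JSON.parse(payload));
    });
    io.on('connection', function(socket){
        //log every message received on the wire into the shared redis key store
        socket.on('message', function(data){
            pub.publish('messages', JSON.stringify(data));
        });
    });
};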

There is an entire blog dedicated to modularizing redis clients here

How to modularize redis in a nodejs WebSocket application

The example of integrating redis in a nodejs application is tightly coupled to the redis event handlers. Ejecting handlers can be a good starting point. Grouping ejected handlers in a module can follow suit. The next step in modularization can be composing (injecting redis or its clients) into the resulting modules when needed, as in the sketch below.
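
A minimal sketch of an ejected redis subscriber handler; makeOnMessage is a hypothetical name:

//handlers/on-message.js - ejected redis subscriber handler
module.exports = function makeOnMessage(io){
    return function onMessage(channel, payload){
        io.emit(channel, JSON.parse(payload));
    };
};

//composition at the call site: sub.on('message', makeOnMessage(io));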

How to share sessions between the HTTP server and WebSocket server

If we look closer, especially when dealing with namespaces, we find a similarity between HTTP requests (handled by express in our example) and WebSocket messages (handled by socket.io in our example). For applications that require authentication, or any other type of server-side session, it would not be necessary to authenticate once per protocol. To solve this problem, we rely on a middleware that passes session data between the two protocols.

Modularization reduces the complexity associated with large-scale nodejs applications in general. We assume that socket.io/expressjs applications won't be an exception in the current context. In a real-time context, we focus on making most parts accessible to be used by other components and tests.

Express routes can use the socket.io instance to deliver some messages. The structure of a socket.io-enabled application looks like the following:

//module/socket.js
//server: the http server or express app instance 
var socketio = require('socket.io');
module.exports = function(server){
  var io = socketio.listen(server);
  io.on('connect', fn); //fn: your connection/disconnection handlers
  io.on('disconnect', fn);
};
    
//in server.js 
var express = require('express'); 
var app = express();
var server = require('http').createServer(app);

//Application app.js|server.js initialization, etc. 
require('./module/socket.js')(server);
        

For the socket.io app to use the same Express server instance, or to share a route instance with the socket.io server:

//routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(){
    route.all('*', function(req, res, next){ 
        res.send(); 
        next();
    });
    return route;//makes the router available to the calling application
};

//socket.js - has socket communication code
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

Socket Session sharing

Sharing session between socket.io and Express application

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between `socket.io` and Express 
//sessionMiddleware: the same express-session instance used by the Express app
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

Conclusion

Modularization is a key strategy in crafting reusable, composable software. It brings not only elegance, but also makes copy/paste detectors happy, while improving both performance and testability.

In this article, we revisited how to aggregate WebSocket code into composable and testable modules. Grouping related tasks into modules makes it possible to add Pub/Sub support on demand, and to swap in various solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

The ever-growing number of files does not spare test files. The number of similar test double files can be used as an indication of a need to refactor, or modularize, test doubles. This blog applies the same techniques we used to modularize other layers of a nodejs application, but in an automated testing context.

In this article we will talk about:

  • The need to have test doubles
  • How the utilities library relates to the fixtures library
  • Reducing repetitive imports via a unified export library
  • How to modularize fixtures of spies
  • How to modularize fixtures of mock data
  • How to modularize fixtures of fakes
  • How to modularize fixtures of stubs
  • How to modularize test doubles for reusability
  • How to modularize test doubles for composability

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var should = require('should');
var expect = require('expect');
var chai = require('chai');

Example: assertion library imports repeated in every test file

What can possibly go wrong?

The following points may be a challenge when modularizing test doubles:

  • Some testing libraries share dependencies with the project they are supposed to test
  • Individual test doubles can be replicated in multiple places
  • With this knowledge, how can we reduce the waste and reuse most of the dependencies?

In the next sections, we make a case on modularization for reusability as a solution to reduce code duplication.

The Status Quo

Every test double library is in fact an independent library. That remains true even when some libraries are bundled and shipped together, as is the case for chai (which ships with should and expect). Every mock, every spy, and every stub we make in one place can potentially be replicated in multiple other places that test similar code blocks, or code blocks that share dependencies.

One of the solutions to share common test double configurations across multiple test cases is to organize test doubles in modules.

The need to have test doubles in tests

In this series, there is one blog that discusses the difference between the various test doubles: spy/mock/stubs/fake and fixtures. For the sake of brevity, that will not be our concern for the moment. Our concern is to reflect on why we should have test doubles in the first place.

From a time and cost perspective, it takes time to load one single file. It takes even longer to load multiple files, be it in parallel or sequentially. The higher the number of test cases spanning multiple files, the longer the test runner takes to complete execution. This adds more execution time to an already slow process.

If there is one improvement, amongst others, that would save us time, it is reusing the same library as often as possible, while mimicking the implementation of things we don't really need to load (mocking, etc.).

Testing code should act as a state machine, or a pure function: every input results in the same output. Test doubles are essentially tools that help us save time and cost, as drop-in replacements for expected behaviors.

How utilities relate to fixtures

In this section, we pause a little bit to answer the question: “How does the utilities library relate to the fixtures library?”

Utility libraries (utilities) provide tools that are not necessarily related to the core business of the program, but are necessary to complete a set of tasks. The need for utilities is not limited to business logic; it extends to testing code. In the context of tests, the utilities are going to be referred to as fixtures. Fixtures can hold computations or data that emulate a state under which the program has to be tested.

Grouping imports with unified export library

The module system provided by nodejs is a double-edged sword. It presents opportunities to create granular systems, but repetitive imports weaken the performance of the application.

To reduce repetitive imports, we make good use of the index file. This compensates for our refusal to attach modules to the global object. It also makes it possible to abstract away the file structure: one doesn't have to know the whole project's structure to import one single function. A sketch follows.
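
A minimal sketch of such a unified export, with assumed file names:

//test/fixtures/index.js - one import point for shared test utilities
module.exports = {
    chai: require('chai'),
    sinon: require('sinon'),
    userMock: require('./data/user-mock').userMock,
};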

How to modularize fixtures of spies

The modularization of spies generally takes one step. Since the spies already have a name, it makes sense to group them under the fixtures library, by category or feature, and export the resulting module. The use of the index file makes it possible to export complex file systems via one single import (or export, depending on perspective). A sketch follows.
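
As a hedged sketch, a named spy grouped under the fixtures library; loggerSpy is an assumed name:

//test/fixtures/spies/logger-spy.js
var sinon = require('sinon');
module.exports.loggerSpy = sinon.spy(function log(message){ return message; });

//test/fixtures/spies/index.js re-exports it:
//module.exports.loggerSpy = require('./logger-spy').loggerSpy;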

How to modularize fixtures of mock data

Mock data is the cornerstone of simulating a desired test state when one kind of data is injected into a function/system. Grouping related data under the same umbrella makes sense in most cases. After the fact, it makes sense to expose the data via export constructs, as in the sketch below.
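
For instance, a minimal mock data module could look like this (the shape is assumed for illustration):

//test/fixtures/data/user-mock.js
module.exports.userMock = { id: 1, name: 'Jane Doe', email: 'jane@example.com' };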

How to modularize fixtures of fakes

Fakes are functions similar to the implementations they are designed to replace, most of the time third-party functionality, and can be used to simulate original behavior. When two or more fakes share striking similarities, they become good candidates for merging, refactoring, and modularization; see the sketch below.
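
A rough sketch of a fake standing in for a third-party email sender; the signature is assumed:

//test/fixtures/fakes/email-fake.js
module.exports.emailFake = function emailFake(to, message, done){
    //pretends the delivery succeeded, without hitting the network
    return done(null, { delivered: true, to: to });
};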

How to modularize fixtures of stubs

Stubs are often mistaken for mocks. That is because they tend to operate in similar use cases. A stub is a fake that replaces a real implementation, capable of receiving and producing a pre-determined outcome using mock data. The modularization takes a single step, in case the stub is already named. The last step is to actually export and reveal/expose the function as an independent/exportable function, as sketched below.
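
A minimal sketch of a named, exported stub built on the mock data above; the names are assumed:

//test/fixtures/stubs/find-user-stub.js
var sinon = require('sinon');
var userMock = require('../data/user-mock').userMock;
//yields() invokes the callback passed to the stub with (null, userMock)
module.exports.findByIdStub = sinon.stub().yields(null, userMock);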

How to modularize test doubles for reusability

Test doubles are reusable by nature. There is no difference between designing functions/classes and designing test doubles for reusability per se. To be able to reuse a class/function, that function has to be exposed to the external world. That is where the export construct comes into the picture.

How to modularize test doubles for composability

Composability, on the other side, is the ability of one module to be composed with others. For that to happen, the main client that is going to use the library has to be injected into the library, either via a thunk or a similar strategy. The following example shows how a test double can be modularized for composability.
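
Since the original example did not survive here, the following is a hedged reconstruction: a fixture that receives its sinon instance via a thunk (makeTwitterFixture is a hypothetical name):

//test/fixtures/twitter-fixture.js
module.exports = function makeTwitterFixture(sinon){
    return {
        streamStub: sinon.stub().returns({ on: sinon.spy() }),
    };
};

//usage in a test file:
//var fixture = require('./fixtures/twitter-fixture')(require('sinon'));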

Some stubbing questions we have to keep in mind:

  • How does stubbing differ from mocking?
  • How does stubbing differ from spying? Spies/stubs are functions with pre-programmed behavior.
  • How to know if a function has been called with a specific argument? For example: I want to know that res.status(401).send() was called. More has been discussed in this blog as well: spy/mock/stubs/fake and fixtures.

Making chai, should and expect accessible

The approach explained below makes it possible to have a pre-configured chai available in a global context, without attaching chai explicitly to the global object.

  • There are multiple ways to go with modularization, but the most basic is using exports.
  • This technique does not make any library available by default, but it is designed to reduce boilerplate when testing.
var chai = require('chai');
module.exports.chai = chai; 
module.exports.should = chai.should(); //should() activates the should interface and returns it
module.exports.expect = chai.expect; 

Example: a unified export of pre-configured chai utilities

Conclusion

Modularization is a key strategy in crafting reusable, composable software. Modularization brings elegance, improves performance, and, in this case, reusability of test doubles across the board.

In this article, we revisited how test double modularization can be achieved by leveraging the power of module.exports (or export in ES6+). The ever-increasing number of similar test double instances makes them good candidates for modularization; at the same time, it is imperative that the modularization stay minimalistic. That is the reason why we leveraged the index file: to make sure we do not overload already complex architectures. There are additional complementary materials in the “Testing nodejs applications” book, on this very same subject.


tags: #snippets #nodejs #spy #fake #mock #stub #test-doubles #question #discuss

Testing functions attached to objects other than a class instance constitutes, at first sight, an intimidating edge case. Such objects range from object literals to modules. This blog explores some test double techniques to shine a light on such cases.

For context, the difference between a function and a method is that a method is a function encapsulated into a class.

In this article we will talk about:

  • Key differences between a spy, a stub, and a fake
  • When it makes sense to use a spy over a stub

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');

module.exports.removeUserPhoto = function(req, res, next){
    let filepath = `/path/to/photos/${req.params.photoId}.jpg`;
    fs.unlink(filepath, (error) => {
        if(error) return next(error);
        return res.status(200).json({
            message: `Your photo is removed - Photo ID was ${req.params.photoId}`
        });
    });    
}

Example: a simple controller that takes a photo ID and deletes the files associated with it

What can possibly go wrong?

Some challenges when mocking chained functions:

  • Stubbing a method, while keeping original callback behavior intact

Show me the tests

From the How to mock chained functions article, there are three avenues, relevant to the current context, that we can leverage for our mocking strategy.


let outputMock = { ... };
//each line below is an alternative; use one at a time
sinon.stub(obj, 'func').returns(outputMock);
sinon.stub(obj, 'func').callsFake(function fake(){ return outputMock; });
let func = sinon.spy(function fake(){ return outputMock; });

We can put those approaches to the test in the following test case:

var sinon = require('sinon');
var assert = require('chai').assert;

// Somewhere in your code. 
it('#fs:unlink removes a file', function () {
    this.fs = require('fs');
    var func = function(path, fn){ return fn.apply(this, [null]); };//mocked behaviour 
    
    //Spy + Stubbing fs.unlink function, to avoid a real file removal
    var unlink = sinon.stub(this.fs, "unlink").callsFake(func);
    this.fs.unlink('/path/to/photos/1.jpg', function(){});//exercising the stub
    assert(this.fs.unlink.called, "#unlink() has been called");

    unlink.restore(); //restoring default function 
});

Conclusion

In this article, we established the difference between the stub/spy and fake concepts, how they work in concert to deliver effective test doubles, and how to leverage their drop-in replacement capabilities when testing functions.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

Mocking and stubbing walk hand in hand. In this blog, we document stubbing functions with promise constructs. The use cases are going to be based on Models. We keep in mind that there is a clear difference between mocking versus stubbing/spying and using fakes.

In this article we will talk about:

  • Stub a promise construct by replacing it with a fake
  • Stub a promise construct by using third-party tools
  • Mocking database-bound input and output

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


//Lab Pet
window.fetch('/full/url/').then(function(res){ 
    service.doSyncWorkWith(res); 
    return res; 
}).catch(function(err){ 
    return err;
});

Example: a fetch call consuming a promise

What can possibly go wrong?

When trying to figure out how to approach stubbing functions that return a promise, the following points may be a challenge:

  • How to deal with the asynchronous nature of the promise.
  • Making stubs drop-in replacements of some portion of the code block, while leaving everything else intact.

The following sections will explore how to make the points stated above work.

Content

  • From Johnny Reeves' blog: stub the service's async function, then return a mocked response

var sinon = require('sinon');
describe('#fetch()', function(){
    before(function(){ 
        //one way
        fetchStub = sinon.stub(window, 'fetch').returns(bakedPromise(mockedResponse));
        //other way: callsFake() replaces the deprecated three-argument stub
        fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
            return bakedPromise(mockedResponse);
        });
        //other way
        fetchStub = sinon.stub(window, 'fetch').resolves(mockedResponse);

    });
    after(function(){ fetchStub.restore(); });
    it('works', function(){
        //use default function like nothing happened
        window.fetch('/url');
        assert(fetchStub.called, '#fetch() has been called');
        //or 
        assert(window.fetch.called, '#fetch() has been called');
    });
    it('fails', function(){
        //one way
        fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
            return bakedFailurePromise(mockedResponse);
        });
        //another way using 'sinon-stub-promise's returnsPromise()
        //PS: You should install => npm install sinon-stub-promise
        fetchStub = sinon.stub(window, 'fetch').returnsPromise().rejects(reasonMessage);

    });
});

Example: three ways to stub window.fetch

  • bakedPromise() is any function that takes a mocked (baked) response and returns a promise
  • This approach doesn't tell you whether service.doSyncWorkWith() has been executed. For that, an additional spy on the service is needed.

Conclusion

In this article, we established the difference between promises and regular callbacks, how to stub promise constructs, especially in a database operations context, and how to replace them with fakes. Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #code #annotations #question #discuss

The stream API provides an asynchronous computation model for heavy workloads, while keeping a small memory footprint. As exciting as it may sound, testing streams is somehow intimidating. This blog lays out some key elements necessary to be successful when mocking the stream API.

We keep in mind that there is a clear difference between mocking versus stubbing/spying/fakes, even though we use mock interchangeably.

In this article we will talk about:

  • Understanding the difference between Readable and Writable streams
  • Stubbing Writable stream
  • Stubbing Readable stream
  • Stubbing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');
var gzip = require('zlib').createGzip();//quick example to show multiple pipings
var route = require('express').Router(); 
//getter() reads a large file of songs metadata, transforms it, and sends back scaled-down metadata 
route.get('/songs', function getter(req, res, next){
        let rstream = fs.createReadStream('./several-TB-of-songs.json'); 
        rstream.
            pipe(new MetadataStreamTransformer()).
            pipe(gzip).
            pipe(res);
        // forwarding the error to the next handler     
        rstream.on('error', (error) => next(error, null));
});

At a glance: the code is supposed to read a very large JSON file (terabytes of songs metadata), apply some transformations, gzip, and send the response to the caller, by piping the results onto the response object.

The next example demonstrates what a typical transformer such as MetadataStreamTransformer looks like:

const inherits = require('util').inherits;
const Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//<= enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo process the chunk by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || Math.random() });//Math.random() stands in for an id generator
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals that the operation is over 
    if(typeof next === 'function') {next();}
};

Inheritance as written in this program might be old-style, but it illustrates well enough, in a prototypal way, that our MetadataStreamTransformer inherits from Stream#Transform.

What can possibly go wrong?

Stubbing functions in a stream processing scenario may yield the following challenges:

  • How to deal with the asynchronous nature of streams
  • Identifying areas where it makes sense to use a stub, for instance: expensive operations
  • Identifying key areas needing drop-in replacements, for instance: reading from a third-party source over the network

Primer

The key when stubbing streams is:

  • To identify where the heavy lifting is happening. In pure stream terms, functions that execute _read() and _write() are our main focus.
  • To isolate some entities, to be able to test small parts in isolation. For instance, making sure we test MetadataStreamTransformer in isolation, and mocking any response fed into the .pipe() operator in other places.

What is the difference between readable vs writable vs duplex streams? The long answer is available in substack's Stream Handbook

Generally speaking, Readable streams produce data that can be fed into Writable streams. Readable streams can be .pipe()d on, but not into. Readable streams have readable|data events and, implementation-wise, implement ._read() from the Stream#Readable interface.

Writable streams can be .pipe()d into, but not on. For example, the res object in the examples above is piped into from an existing stream. The opposite is not always guaranteed. Writable streams have drain|finish events and, implementation-wise, implement ._write() from the Stream#Writable interface.

Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transform streams are duplex and implement ._transform() from the Stream#Transform interface. A minimal sketch of the three kinds follows.
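
As a quick, hedged illustration using nodejs's simplified stream constructors (not code from the original article):

var stream = require('stream');

//Readable: produces data
var source = new stream.Readable({ read(){ this.push('hello'); this.push(null); } });
//Transform: duplex, re-emits transformed data
var upper = new stream.Transform({
    transform(chunk, encoding, next){ next(null, chunk.toString().toUpperCase()); }
});
//Writable: consumes data
var sink = new stream.Writable({
    write(chunk, encoding, next){ console.log(chunk.toString()); next(); }
});

source.pipe(upper).pipe(sink); //prints: HELLO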

Modus Operandi

How do we test the above code by taking on smaller pieces?

  • fs.createReadStream won't be tested, but stubbed to return a mocked readable stream
  • .pipe() will be stubbed to return a chain of stream operators
  • gzip and res won't be tested, and are therefore stubbed to return writable+readable mocked stream objects
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next(), and check whether it has been called
  • MetadataStreamTransformer will be tested in isolation, and MetadataStreamTransformer._transform() will be treated as any other function, except that it accepts streams and emits events

How to stub stream functions

describe('/songs', () => {
    before(() => {
        //responseMock: an assumed stand-in for the response object at the end of the chain
        sinon.stub(fs, 'createReadStream').returns({
            pipe: sinon.stub().returns({
                pipe: sinon.stub().returns({
                    pipe: sinon.stub().returns(responseMock)
                })
            }),
            on: sinon.spy(() => true)
        })
    });
});

This way of chained stubbing is available in our toolbox. With great power comes great responsibility, though: wielding this sword may not always be a good idea.

There is an alternative at the very end of this discussion

The transformer stream class test in isolation may be broken down into:

  • stub the whole Transform instance
  • Or stub the .push() and simulate a write by feeding in the readable mocked stream of data

The stubbed push() is a good place to add assertions:

it('_transform()', function(done){
    var Readable = require('stream').Readable;
    //objectMode + a no-op read() lets us push mocked chunks into the source
    var rstream = new Readable({objectMode: true, read(){}});
    //stubbing on the prototype, so instances created later use the stub
    var mockPush = sinon.stub(MetadataStreamTransformer.prototype, 'push').callsFake(function(data){
        if(data !== null) assert.isNumber(data.id);//testing data sent to callers, etc.
        return true;
    });
    var tstream = new MetadataStreamTransformer();
    rstream.push(JSON.stringify({id: 1}));
    rstream.push(JSON.stringify({id: 2}));
    rstream.push(null);//closes the mocked source
    rstream.pipe(tstream);
    tstream.on('finish', function(){
        expect(tstream.push.called, '#push() has been called').to.be.true;
        mockPush.restore(); 
        done();
    });
});

How to Mock Stream Response Objects

The classic example of a readable stream is reading from a file. This example shows how to mock fs.createReadStream and return a readable stream that is capable of being asserted on.

//the stub can emit two or more data events + close the stream
var Readable = require('stream').Readable;
var rstream = new Readable({read(){}});
sinon.stub(fs, 'createReadStream').callsFake(function(file){ 
    //trick from @link https://stackoverflow.com/a/33154121/132610
    assert(file, '#createReadStream received a file');
    //emitting on the next tick, so that pipe()/listeners attach first
    process.nextTick(function(){
        rstream.emit('data', '{"id":1}');
        rstream.emit('data', '{"id":2}');
        rstream.emit('end');
    });
    return rstream; 
});

var pipeStub = sinon.spy(rstream, 'pipe');
//Once called, the above structure will stream two elements: good enough to simulate reading a file.
//To stub the `gzip` library, treat it as just another transformer stream.
var next = sinon.stub();
//req, res: mocked request/response objects; use this function | or call the whole route 
getter(req, res, next);
//expectations follow: 
expect(rstream.pipe.called, '#pipe() has been called');

Conclusion

In this article, we established the difference between Readable and Writable streams, and how to stub each one of them when unit testing.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #TDD #streams #nodejs #mocking

Scheduled tasks are hard to debug. Inherent to their asynchronous nature, bugs in scheduled tasks strike later; anything that can help prevent that behavior and curb failures ahead of time is always good to have.

Unit testing is one of the effective tools to challenge this behavior. The question we have an answer for is: how do we test scheduled tasks in isolation? This article introduces some techniques to do that. Using modularization techniques on scheduled background tasks, we will shift focus to making chunks of code blocks accessible to testing tools.

In this article we will talk about:

  • How to define a job(task)
  • How to trigger a job(task)
  • How to modularize tasks for testability
  • How to modularize tasks for reusability
  • How to modularize tasks for composability
  • How to expose task scheduling via a RESTful API
  • Alternatives to the agenda scheduling model

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

The following example shows how a job trigger can be used under an expressjs route:


//jobs/email.js
var email = require('some-lib-to-send-emails'); 
var User = require('./models/user.js');

module.exports = function(agenda) {
  agenda.define('registration email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
       if(err) return done(err);
       	var message = ['Thanks for registering ', user.name, 'more content'].join('');
      	return email(user.email, message, done);
     });
  });
  agenda.define('reset password', function(job, done) {/* ... more code*/});
  // More email related jobs
};

//route.js
//lib/controllers/user-controller.js
var express = require('express'),
    app = express(),
    User = require('../models/user-model'),
    agenda = require('../worker.js');

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if(err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent 24 hours
    agenda.now('registration email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

Example: defining email jobs and triggering one from a route

What can possibly go wrong?

When trying to figure out how to approach the modularization of nodejs background jobs, the following points may be quite a challenge on their own:

  • abstracting, and/or injecting, the background job library into an existing application
  • abstracting or scheduling jobs outside the application

The following sections will explore how to make the points stated above work.

How to define a job

The agenda library comes with an expressive API. The interface provides two sets of utilities, one of which is .define(), which does the task definition chore. The following example illustrates this idea.

agenda.define('registration email', 
  function(job, done) {
    //job.attrs.data carries the payload provided at trigger time
});

How to trigger a job

As stated earlier, the agenda library comes with an interface to trigger a job or schedule an already defined job. The following example illustrates this idea.

agenda.now('registration email', {userId: userId});
agenda.every('3 minutes', 'delete old users');
agenda.every('1 hour', 'print analytics report');

How to modularize tasks for reusability

There is a striking similarity between event handling and task definition.

That similarity raises a whole new set of challenges, one of which turns out to be the tight coupling between task definitions and the library that is expected to execute those jobs.

The refactoring technique we have been using all along is handy in the current context as well. We have to eject job definitions from the agenda library constructs. The next step in the refactoring iteration is to inject the agenda object as a dependency, wherever it is needed.

The modularization cannot end at this point; we also need to export individual jobs (task handlers) and expose those exported modules via an index file, as in the sketch below.
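
A hedged sketch of these two steps, reusing the job from the earlier example; the file names are assumed:

//jobs/registration-email.js - the handler ejected from agenda.define()
var User = require('../models/user');
var email = require('some-lib-to-send-emails');

module.exports = function registrationEmail(job, done){
    User.findById(job.attrs.data.userId, function(err, user){
        if(err) return done(err);
        return email(user.email, 'Thanks for registering ' + user.name, done);
    });
};

//jobs/index.js - agenda injected as a dependency
module.exports = function(agenda){
    agenda.define('registration email', require('./registration-email'));
};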

How to modularize tasks for testability

The challenges that arise when mocking any object apply to the agenda instance as well.

The implementation of jobs (or task handlers) will be lost as soon as a stub/fake is provided. The argument that stubs will play well is valid, as long as independent jobs (task handlers) are tested in isolation.

To avoid the need to mock the agenda object in multiple places, loading agenda from a dedicated library provides quite a good solution to this issue. A test sketch follows.
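
A minimal sketch of testing the ejected handler in isolation, no agenda needed; the stubbing of collaborators is hinted at in comments:

var sinon = require('sinon');
var registrationEmail = require('./jobs/registration-email');

it('sends a registration email', function(){
    //stub User.findById and the email sender first, so no db/network is hit
    var done = sinon.spy();
    registrationEmail({ attrs: { data: { userId: 1 } } }, done);
});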

How to modularize tasks for composability

In this modularization series, we focused on one perspective. There is no restriction on turning the tables and seeing things from the opposite vantage point. We can take agenda as an injectable object. The classic approach is the one used with injecting (or mounting) app instances into a set of reusable routes (RESTful APIs), as sketched below.
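
A hedged sketch of a shared, injectable agenda instance; the worker.js name mirrors the earlier require('../worker.js'):

//worker.js - agenda as an injectable, shared instance
var Agenda = require('agenda');
var agenda = new Agenda({ db: { address: 'mongodb://localhost:27017/devdb' } });
require('./jobs')(agenda); //mounts job definitions, mirroring app/route mounting
module.exports = agenda;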

How to expose task scheduling via a RESTful API

One of the reasons to opt for agenda for background task processing is its ability to persist jobs in a database, and to resume pending jobs even after a server shutdown, crash, or data migration from one instance to the next.

This makes it easy to integrate job processing in regular RESTful APIs. We have to remember that background tasks are mainly designed to run like cronjobs.

Alternatives to agenda scheduling model

In this article, we approached job scheduling from a library perspective: agenda. agenda is certainly one among multiple other solutions in the wild, for instance, cronjobs.

Another viable alternative is tapping into system-based solutions such as monit or systemd/systemctl on Linux, and launchd on macOS.

There is a discussion on how to use nodejs to execute monit tasks in this blog, along with monit service poll time.

Modularization of Scheduled Tasks

Modularization of scheduled tasks requires two essential steps, as for any other module. The first step is to make sure the job definition and job trigger (invocation) are exportable, the same way independent functions are. The second step is to provide access to them via an index.

The next two steps help to achieve these two objectives. Before we dive in, it is worth clarifying a couple of points.

  • Tasks can be scheduled from dedicated libraries, cronjobs, and software such as monit.
  • There are a lot of libraries to choose from, such as bull, bee-queue, or kue. agenda is chosen for clarification purposes.
  • Task invocation can be triggered from sockets, routes, and agenda handlers
  • Examples of delayed tasks are sending an email at a given time, deleting inactive accounts, data backup, etc.

agenda uses mongodb to store job descriptions, which is a good choice in case the project under consideration relies on mongodb for data persistence.

Conclusion

Modularization is key when crafting reusable, composable software. Scheduled tasks are not an exception to this rule. Background job modularization brings elegance to the codebase, reduces copy/paste instances, and improves performance and testability.

In this article, we revisited how to make background jobs more testable, by leveraging key modularization techniques. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #scheduled-jobs #nodejs

A server requires the use of network resources, some of which perform expensive read/write operations. Testing servers introduces side effects, some of which are expensive, and may cause unintended consequences when not mocked in the testing phase. To limit the chances of breaking something, testing servers has to be done in isolation.

The question to ask at this stage is: how do we get there? This blog article will explore some of the ways to answer this question.

The motivation for modularization is to reduce the complexity associated with large-scale expressjs applications. In the nodejs server context, we will shift focus to making sure most of the parts are accessible for tests in isolation.

In this article we will talk about:

  • How to modularize nodejs server for reusability.
  • How to modularize nodejs server for testability.
  • How to modularize nodejs server for composability.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

A nodejs application server comes in two flavors: using the native nodejs http library, or adopting a server provided via a framework, in our case expressjs.

Using the expressjs framework, classic server code looks like the following example:

var express = require('express'),
    app = express(),
    port = process.env.PORT || 3000;
/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!');
});

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!');
});
//source: https://expressjs.com/en/starter/hello-world.html

Example: a classic expressjs hello-world server

As requirements increase, this file becomes exponentially big. Most applications run on top of expressjs, a popular library in the nodejs world. To keep server.js small, regardless of requirements and dependent modules, moving most of the code into modules makes a difference.

var http = require('http'),
  hostname = 'localhost',
  port = process.env.PORT || 3000,
  server = http.createServer(function(req, res){
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Hello World\n');
  });

//Alternatively
var express = require('express'),
    app = express(),
    server = http.createServer(app);
require('./app/routes')(app);//mounts the application routes

server.listen(port, hostname, function (){
  console.log(['Server running at http://', hostname, ':', port].join(''));
});
//source: https://nodejs.org/api/synopsis.html#synopsis_example

Example: a native http server, with an expressjs alternative

What can possibly go wrong?

When trying to figure out how to approach modularizing nodejs servers, the following points may be a challenge:

  • Understanding where to start, and where to stop with server modularization
  • Understanding key parts that need abstraction, or how/where to inject dependencies
  • Making servers testable

The following sections will explore how to make the points stated above work.

How to modularize nodejs server for reusability

How do we apply the modularization technique in a server context? In other words, how do we break down a larger server file into smaller, more granular alternatives?

Server reusability becomes an issue when it becomes clear that the server bootstrapping code either needs some refactoring, or presents an opportunity to add extra test coverage.

In order to make the server available to the third-party sandboxed testing environment, the server has to be exportable first.

Likewise, in order to be able to load and mock/stub certain areas of the server code, the server has to be exportable.

Like any other modularization technique we used, two steps are going to be in play. Since our case concerns multiple players, for instance expressjs, WebSocket, and whatnot, we have to look at the HTTP server as an equal of those other possible servers.

How to modularize nodejs server for testability

Simulations of start/stop while running tests are catalysts of this exercise.

Testability and composability are other real drivers in making the server modular. A modular server makes it easy to load the server, as we load any other object, into the testing sandbox, as well as to mock any dependency we deem unnecessary or that prevents us from getting the job done.

Simulation of start/stop while running tests is covered in “How to correctly unit test an express server”; a better code structure organization, one that makes it easy to test and get coverage, is covered in “Testing nodejs with mocha”.

The previous example shows how much simpler server initialization becomes, though that comes with an additional library to install. Modularization of the above two code segments makes it possible to test the server in isolation.

module.exports = server;

Example: modularization – this line makes the server available in our tests ~ source

How to modularize nodejs server for composability

The challenge is to expose the HTTP server in a way that redis/WebSocket or agenda components can reuse the same server; in other words, making the server injectable.

The composability of the server is rather counter-intuitive. In most cases, the server will be injected into other components, for those components to mount additional server capabilities. The code sample proves this point by making the HTTP server available to a WebSocket component, so that the WebSocket can be aware of, and mounted/attached to, the same instance of the HTTP server.

var http = require('http'), 
    app = require('express')(),
    server = http.createServer(app),
    sio = require("socket.io")(server);

...

module.exports = server;

Conclusion

Modularization is key in making a nodejs server elegant; it serves as a baseline for performance improvements and improved testability. In this article, we revisited how to achieve nodejs server modularity, with stress on testability and code reusability. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #nodejs #expressjs

We assume most of the system components to be accessible for testability. However, that is challenging when routes are a little bit complex. To reduce the complexity that comes with working on large-scale expressjs routes, we will apply a technique known as manifest routes, to make route declarations change-proof and more stable as the rest of the application evolves.

In this article we will talk about:

  • The need to have manifest routes technique
  • How to apply the manifest routes as a modularization technique

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var express = require('express');
var app = express();
var port = process.env.PORT || 3000;

app.get('/', function(req, res, next) {  
  res.render('index', { title: 'Express' });
});

/** code that initialize everything, then comes this route*/
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!')
});

What can possibly go wrong?

When trying to figure out how to approach the modularization of expressjs routes with the manifest routes pattern, the following points may be a challenge:

  • Where to start with modularization without breaking the rest of the application
  • How to introduce the layered architecture, without incurring additional test burden, but making it easier to isolate tests

The following sections will explore how to make the points stated above work.

The need to have manifest routes technique

There is a subtle nuance that is missing when following traditional approaches to modularization.

When adding an index file as a part of the modularization process, exporting the content of directories (and, for that matter, sub-directories) does not by itself result in exporting routes that can be plugged into existing expressjs applications.

The remedy is to create, isolate, export, and manifest them to the outer world.

How to apply manifest routes to handlers for reusability

The handlers are a beast in their own way.

A collection of related route handlers can be used as a baseline to create the controller layer. The modularization of this newly created/revealed layer can be achieved in two steps, as was the case for other use cases. The first step consists of naming, ejecting, and exporting single functions as modules. The second step consists of adding an index to every directory and exporting the content of the directory.

Manifest routes

In essence, requiring a top-level directory will look for index.js at the top of that directory, and make all the route content accessible to the caller.

var routes = require('./routes'); 

Example: /routes has index.js at top level directory ~ source

A typical default entry point of the application:

var express = require('express');  
var router = express.Router();

router.get('/', function(req, res, next) {  
  return res.render('index', { title: 'Express' });
});
module.exports = router;  

Example: default /index entry point

Anatomy of a route handler

module.exports = function (req, res) {  };

Example: routes/users/get-user|new-user|delete-user.js

“The most elegant configuration that I've found is to turn the larger routes with lots of sub-routes into a directory instead of a single route file” – Chev source

When individual routes/users sub-directories are put together, the resulting index would look like the following code sample:

var router = require('express').Router();  
router.get('/get/:id', require('./get-user.js'));  
router.post('/new', require('./new-user.js'));  
router.post('/delete/:id', require('./delete-user.js'));  
module.exports = router;    

Example: routes/users/index.js

An update when routes/users/favorites/ adds more sub-directories:

router.use('/favorites', require('./favorites')); 
...
module.exports = router;

Example: routes/users/index.js ~ after adding a new favorites requirement

We can go the extra mile and group route handlers into controllers. Using a route with a controller's route handlers would look like the following example:

var router = require('express').Router();
var catalogues = require('./controllers/catalogues');

router.route('/catalogues')
  .get(catalogues.getItem)
  .post(catalogues.createItem);
module.exports = router;
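
For completeness, a minimal sketch of the controller module the router above requires; the Catalogue model and its persistence calls are assumptions:

/** controllers/catalogues/index.js ~ hypothetical controller layer */
var Catalogue = require('../../models').Catalogue;

exports.getItem = function (req, res, next) {
  Catalogue.find({}, function (error, items) {
    if (error) return next(error);
    return res.status(200).json(items);
  });
};

exports.createItem = function (req, res, next) {
  Catalogue.create(req.body, function (error, item) {
    if (error) return next(error);
    return res.status(201).json(item);
  });
};

Example: controllers/catalogues/index.js ~ hypothetical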

Conclusion

Modularization makes expressjs routes reusable, composable, and stable as the rest of the system evolves. It brings elegance to route composition, improves testability, and reduces redundancy.

In this article, we revisited a technique, known under the manifest route moniker, that improves the elegance, testability, and reusability of expressjs routes. We also restated that the manifest route technique goes the extra mile in modularizing expressjs routes. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #modularization #manifest-routes #nodejs #expressjs

divide et impera

One of the key issues when working with large-scale nodejs applications is managing complexity. Modularization shifts the focus to transforming the codebase into reusable, easy-to-test modules. This article explores some techniques used to achieve that.

This article is more theoretical; its companion, “How to make nodejs applications modular”, is more technical and may help put these ideas into practice.

In this article we will talk about:

  • Exploration of modularization techniques available within the ecosystem
  • Leveraging module.exports or import/export utilities to achieve modularity
  • Using the index file to achieve modularity
  • How the above techniques can be applied at scale

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

This piece of code will go through modularization in the “How to make nodejs applications modular” blog post. For now, we highlight its failures and points of interest below.

var express = require('express');
var path = require('path');
var app = express();

/** Data Layer */
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/devdb');
var User = require('./models').User;

/**
 * Essential Middlewares
 */
app.use(express.logger());
app.use(express.cookieParser());
app.use(express.session({ secret: 'angrybirds' }));
app.use(express.bodyParser());
app.use((req, res, next) => { /** Adding CORS support here */ next(); });

/** point of interest: this catch-all intercepts every request that reaches it */
app.use((req, res) => res.sendFile(path.normalize(path.join(__dirname, 'index.html'))));


/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!');
});


/** code that initializes everything, then comes this route */
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

/**
 * More code, more time, more developers.
 * Then you realize that you actually need:
 */
app.get('/admin/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
/**
 * This would work just fine, but we may also have a requirement to listen to Twitter changes
app.listen(port, function () {
  console.log('Example app listening on port 3000!')
});
*/

app.set('port', process.env.PORT || 8080);
var server = require('http').createServer(app);
server.listen(app.get('port'), () => console.log(`Listening on ${ app.get('port') }`));
var wss = require('socket.io')(server);
//Handling realtime data
wss.on('connection', (socket) => {
    socket.on('error', () => {});
    socket.on('pong', () => {});
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

Example: a monolithic application entry point, before modularization

What can possibly go wrong?

When trying to navigate strategies around modularization of nodejs applications, the following points may be a challenge:

  • Where to start with modularization
  • How to choose the right modularization technique.

The following sections will explore more on making points stated above work.

Modules

In the nodejs context, anything from a variable, to a function, to a class, or even an entire library qualifies to become a module.

A module can be seen as an independent piece of code dedicated to doing one and only one task at a time. The amalgamation of multiple tasks under one abstract task, or one unit of work, is also a good module candidate. To sum up, modules come as functions, objects, classes, configuration metadata, initialization data, servers, etc.

Modularization is one of the techniques used to break down large software into smaller, more malleable, more manageable components. In this context, a module is treated as the smallest independent, composable piece of software that does only one task. Testing such a unit in isolation becomes relatively easy, as the sketch below suggests, and since it is a composable unit, integrating it into another system becomes a breeze.
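
As a minimal sketch of that claim, a single-task module can be exercised in isolation with nothing but node's built-in assert; the module name and task are hypothetical:

/** slugify.js ~ one task: turn a title into a URL slug */
module.exports = function slugify(title) {
  return title.toLowerCase().trim().replace(/\s+/g, '-');
};

/** slugify.test.js ~ isolated test, no framework required */
var assert = require('assert');
var slugify = require('./slugify');
assert.strictEqual(slugify('Divide et Impera'), 'divide-et-impera');

Example: slugify.js ~ hypothetical single-task module and its isolated test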

Leveraging exports

To make a unit of work a module, nodejs exposes the module.exports/require (CommonJS) and import/export (ES2015+ modules) utilities. Modularization is therefore achieved by leveraging the power of module.exports, the CommonJS equivalent of export. With that idea, the question “Where to start with modularization?” becomes workable.

Every function, object, class, configuration metadata, initialization data, or server that can be exported has to be exported. That is what leveraging module.exports or import/export utilities to achieve modularity looks like, as the sketch below shows.
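
A minimal sketch, assuming a hypothetical date-formatting utility; the same unit of work exported in both module systems:

/** lib/format-date.js ~ CommonJS */
module.exports = function formatDate(date) {
  return date.toISOString().split('T')[0];
};

/** lib/format-date.mjs ~ ES modules equivalent */
export function formatDate(date) {
  return date.toISOString().split('T')[0];
}

Example: lib/format-date.js ~ hypothetical utility exported both ways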

After each individual entity becomes exportable, there is a small enhancement that can make importing the entire library, or a set of modules, a bit easier, depending on the project structure, be it feature-based or kind-based: the index file, discussed below.

At this point, we may ask ourselves whether the techniques explained above can indeed scale.

The “large” aspect of a large-scale application combines lines of code (20k+ LoC), the number of features, third-party integrations, and the number of people contributing to the project. Since these parameters are not mutually exclusive, a one-person project can also be large scale, provided it has a fairly large codebase or a sizable number of third-party integrations.

nodejs applications, like any application stack for that matter, tend to become big and hard to maintain past a certain threshold. There is no better strategy to manage complexity than breaking big components down into small, manageable chunks.

Large codebases tend to be hard to test, and therefore hard to maintain, compared to their smaller counterparts. Obviously, nodejs applications are no exception to this.

Leveraging the index

Using an index file at every directory level makes it possible to load modules with a single instruction. Modules grouped this way are expected to be related, and hosted in the same directory. Directories can mirror categories (kinds), features, or a mixture of both. Adding an index file at every level ensures we establish control over the divided entities, aka divide and conquer; a sketch follows.
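
A minimal sketch, assuming a hypothetical kind-based models/ directory with user.js and order.js modules:

/** models/index.js ~ exposes the directory content in one place */
module.exports = {
  User: require('./user'),
  Order: require('./order')
};

/** elsewhere: a single instruction loads the whole directory */
// var User = require('./models').User;

Example: models/index.js ~ hypothetical kind-based directory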

Divide and conquer, divide et impera, is an old Roman technique for managing complexity. Dividing a big problem into smaller, manageable ones allowed the Roman Army to conquer, maintain, and administer a large chunk of the known world in antiquity.

Scalability

How the above techniques can be applied at scale

The last question in this series is whether the above-described approach can scale. The key to scalability is to first build things that do not scale; then, when scalability becomes a concern, figure out how to address it. The first iteration, therefore, is not supposed to be scalable.

Since an index is available in every directory, and the index's role is to expose the directory's content to the outer world, it does not matter whether the directory count yields 1, 100, or 1000+. A simple call to the parent directory gives access to 1, 100, or 1000+ libraries.

From this vantage point, the introduction of an index at every level of the directory tree comes with scalability as a “cherry on top of the cake”, as the sketch below suggests.
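
A sketch of how nested indexes compose, assuming hypothetical users/ and catalogues/ sub-directories, each with its own index:

/** routes/index.js ~ the top-level manifest */
var router = require('express').Router();
router.use('/users', require('./users'));
router.use('/catalogues', require('./catalogues'));
module.exports = router;

/** the caller needs a single instruction, regardless of directory count */
// app.use('/', require('./routes'));

Example: routes/index.js ~ hypothetical top-level manifest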

Where to go from here

This post focused on the theoretical side of the modularization business. The next step is to put the techniques described herein to the test in the next blog post.

Conclusion

Modularization is a key strategy for crafting reusable, composable software components. It brings elegance to the codebase, reduces copy/paste occurrences (DRY), improves performance, and makes the codebase testable. Modularization reduces the complexity associated with large-scale nodejs applications.

In this article, we revisited how to increase the testability of key layers by leveraging basic modularization techniques. Techniques discussed in this article are applicable to other aspects of nodejs applications. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #code #annotations #question #discuss