Simple Engineering

Configuration is one of the software component layers and, as such, should be testable in isolation like any other component. Modularization of the configuration layer improves its reusability and testability. The question we should be asking is: How do we get there? Answering that question is the objective of this article.

The 12 Factor App, a collection of good practices, advocates for “strict separation of configuration from code” and “storing configuration in environment variables”, among other things.

The 12 Factor App challenges the status quo when it comes to configuration management. The following paragraph, taken verbatim from the documentation, is a clear illustration of that fact.

“A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.” ~ verbatim text from the 12 Factor App ~ config section

In this article we will talk about:

  • Differentiation of configuration layers
  • How to decouple code from configuration
  • How to modularize configuration for testability
  • How to prevent configuration key leaks in public space

Techniques and ideas discussed in this blog are available in more detail in the “Configurations” chapter of the “Testing nodejs Applications” book. You can grab a copy via this link.

Show me the code

const Twitter = require('twitter');

function TwitterClient(accounts, ids) {

    this.client = new Twitter({
        consumer_key: `Plain Text Twitter Consumer Key`,
        consumer_secret: `Plain Text Twitter Consumer Secret`,
        access_token_key: `Plain Text Twitter Access Token Key`,
        access_token_secret: `Plain Text Twitter Access Token Secret`
    });

    //accounts such as : @TechCrunch, @Twitter, etc 
    this.track = Array.isArray(accounts) ? accounts.join(',') : accounts;
    //ids: corresponding Twitter Accounts IDs 816653, 783214, etc  
    this.follow = Array.isArray(ids) ? ids.join(',') : ids;
}

/**
 * <code>
 * let stream = new TwitterClient('@twitter', '783214').getStream();
 * stream.on('error', error => handleError(error));
 * stream.on('data', tweet => logTweet(tweet));
 * </code>
 * @name getStream - Returns Usable Stream
 * @returns {Object<TwitterStream>}
 */
TwitterClient.prototype.getStream = function(){
    return this.client.stream('statuses/filter', {track: this.track, follow: this.follow});
};

Example: a Twitter client with hard-coded credentials and settings

What can possibly go wrong?

When trying to figure out how to approach modularizing configurations, the following points may be a challenge:

  • Being able to share the source code without leaking secret keys to the world
  • Laying down a strategy to move configurations into configuration files
  • Making configuration settings as testable as any module.

The following sections explore how to make the points stated above work.

Layers of configuration of nodejs applications

Although this blog article provides a basic understanding of configuration modularization, it defers configuration management to another blog post: “Configuring nodejs applications”.

From a production readiness perspective, at least in the context of this blog post, there are two distinct layers of application configurations.

The first layer consists of configurations that a nodejs application needs to execute intrinsic business logic. They will be referred to as environment variables/settings. Third-party issued secret keys or server port number configurations fall under this category. In most cases, you will find such configurations hard-coded in static variables inside the application.

The second layer consists of configurations required by the system that is going to host the nodejs application. Database server settings, monitoring tools, SSH keys, and other third-party programs running on the hosting entity are a few examples that fall under this category. We will refer to these as system variables/settings.

This blog will be about working with the first layer: environment settings.

Decoupling code from configuration

The first step in decoupling configuration from code is to identify and normalize the way we store our environment variables.

module.exports = function hasSecrets(){
    const SOME_SECRET_KEY = 'xyz=';
    // ... logic using SOME_SECRET_KEY
};

Example: function with an encapsulated secret

The previous function encapsulates secret values that can be moved outside the application. If we apply this technique, SOME_SECRET_KEY will be moved outside the function and imported whenever needed instead.

const SOME_SECRET_KEY = require("./config").SOME_SECRET_KEY;

module.exports = function hasSecrets(){
    // ... logic using SOME_SECRET_KEY
};

Example: function with a decoupled secret value

This process has to be repeated all over the application, until every single secret value is replaced with its constant equivalent. It doesn't have to be good on the first try; it simply has to work. We can make it better later on.

Configuration modularization

For curiosity's sake, what does config.js look like at the end of the “decoupling configuration from code” step?

module.exports.SOME_SECRET_KEY = 'xyz=';

Example: the first iteration of decoupling configuration from code

This step works but has essentially two key flaws:

  • In a team of multiple players, each player having their own environment variables, the config.js file will become a liability. It doesn't scale that well.
  • This strategy will not prevent the catastrophe of leaking secrets to the public, should the code become open source.

To mitigate this, after normalizing the way we store and retrieve environment variables, the next step is to organize the results in a module. Modules are portable and easy to test.

Modularization makes it possible to test configuration in isolation. Yes, we will have to prove to ourselves it works, before we convince others that it does!
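
A minimal sketch of what such a module can look like, assuming secrets are provisioned via environment variables; the variable names used here are hypothetical:

//config.js
module.exports = {
    port: process.env.PORT || 8080,
    twitter: {
        consumerKey: process.env.TWITTER_CONSUMER_KEY,
        consumerSecret: process.env.TWITTER_CONSUMER_SECRET
    }
};

Example: a modularized configuration file

Because the module is a plain object, a test can require it and assert on its shape without booting the whole application.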

Measures to prevent private key leakage

The first line of defense when it comes to preventing secret keys from leaking to the public is to make sure not a single private value is stored in the codebase itself. The following example illustrates this statement.

module.exports = function leakySecret(){
    const SOME_SECRET_KEY = 'xyz=';
    // ... logic using SOME_SECRET_KEY
};

Example: function with a leak-able secret key

The second line of defense is to decouple secret values from the application itself, and use an external service to provision secret values at runtime. nodejs makes it possible to read such values from the process environment (process.env).

A simple yet powerful tool is the dotenv library. This library can be swapped out, depending on taste or project requirements.

One of the alternatives to dotenv includes convict.js.

Last but not least, since we are using git, adding .env to .gitignore prevents contributors from accidentally committing their .env files to the shared repository.

dotenv-extended makes it possible to load *nix-style variables from a dotenv file.

require('dotenv').config();
const Twitter = require('twitter');

function TwitterClient(accounts, ids) {
    this.client = new Twitter({
        consumer_key: process.env.TWITTER_CONSUMER_KEY,
        consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
        access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
        access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
    });
    // ... accounts/ids handling as shown earlier
}

Example: reading secret keys from environment variables via dotenv
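
For illustration, a hypothetical .env file and the matching .gitignore entry could look like the following; the variable names mirror the ones read via process.env above, and the values are placeholders:

# .env - kept out of version control
TWITTER_CONSUMER_KEY=plain-text-consumer-key
TWITTER_CONSUMER_SECRET=plain-text-consumer-secret
TWITTER_ACCESS_TOKEN_KEY=plain-text-access-token-key
TWITTER_ACCESS_TOKEN_SECRET=plain-text-access-token-secret

# .gitignore - prevents the .env file from being checked in
.env

Example: preventing .env files from being checked into the central repository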

Conclusion

Modularization is key to crafting re-usable, composable software components. The configuration layer is not an exception to this rule. Modularization of configurations brings elegance and eases the management of critical information such as security keys.

In this article, we re-asserted that with a little bit of discipline, and without breaking our piggy bank, it is still possible to better manage application configurations. Modularization of configuration makes it possible to reduce the risk of secret key leaks, as well as increasing testability. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #modularization #nodejs #configuration

The WebSocket protocol builds on top of an HTTP handshake to make near real-time communication a reality. Adding this capability to an already complex application does not make large-scale applications any easier to work with. Using modularization techniques to decouple the real-time portion of the application makes maintenance a little easier. The question is: How do we get there? This article applies modularization techniques to achieve that.

There is a wide variety of WebSocket implementations to choose from in the nodejs ecosystem. For simplicity, this blog post provides examples using socket.io, but the ideas expressed here are applicable to any other nodejs WebSocket implementation.

In this article we will talk about:

  • How to modularize WebSocket for reusability
  • How to modularize WebSocket for testability
  • How to modularize WebSocket for composability
  • The need for a store manager in a nodejs WebSocket application
  • How to integrate redis in a nodejs WebSocket application
  • How to modularize redis in a nodejs WebSocket application
  • How to share session between an HTTP server and WebSocket server

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//in server.js 
var express = require('express'); 
var app = express();
// ... other middleware, routes, and model imports (User, etc.)
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
// ...
var server = require('http').createServer(app);
var port = process.env.PORT || 8080;
server.listen(port, () => console.log(`Listening on ${port}`));
var wss = require('socket.io')(server);
//Handling realtime data
wss.on('connection', (socket) => {
    socket.on('error', () => {});
    socket.on('pong', () => { socket.isAlive = true; });
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

What can possibly go wrong?

The following points may be a challenge when modularizing WebSocket nodejs applications:

  • WebSocket handlers are tightly coupled to the rest of the application. The challenge is how to reverse that.
  • How to modularize for optimal code reuse and easy testability

The following sections explore how to make the points stated above work.

How to modularize WebSocket for reusability

When looking at the WebSocket handlers, something stands out: every handler has a signature that looks like any other event handler common in the JavaScript ecosystem. We also realize that handlers are tightly coupled to the WebSocket object. To break the coupling, we can apply one technique: eject handlers from the WebSocket object, and inject the WebSocket and Socket objects whenever possible (composition).
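
A minimal sketch of the ejection technique, assuming a hypothetical module/handlers.js that receives the server and socket objects instead of importing them:

//module/handlers.js - handlers ejected from the WebSocket object
module.exports.onMessage = function(io, socket){
    return function(message){
        //io and socket are injected by the caller, not imported here
        socket.broadcast.emit('message', message);
    };
};

//wiring the ejected handler back, for instance in server.js
var handlers = require('./module/handlers');
wss.on('connection', function(socket){
    socket.on('message', handlers.onMessage(wss, socket));
});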

How to modularize WebSocket for testability

As noted earlier, the WebSocket event handlers are tightly coupled to the WebSocket object. Mocking the WebSocket object comes with a hefty price: losing the implementation of the handlers. To avoid that, we can tap into two techniques: ejecting handlers, and loading the WebSocket via a utility library.
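
Once handlers are ejected, they can be tested with plain fakes. The following sketch assumes the hypothetical module/handlers.js from the previous example and uses sinon and chai:

//handlers.spec.js
var sinon = require('sinon');
var assert = require('chai').assert;
var handlers = require('./module/handlers');

it('#onMessage() broadcasts the received message', function(){
    var socket = { broadcast: { emit: sinon.spy() } };//fake socket, no server required
    var io = {};//the server object is not needed by this handler
    handlers.onMessage(io, socket)('hello');
    assert(socket.broadcast.emit.calledWith('message', 'hello'), '#emit() has been called');
});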

How to modularize WebSocket for composability

It is possible to shift the perspective on the way the application adds WebSocket support. The question we can ask is: Is it possible to restructure our code in such a way that it requires only one line of code to wipe out the WebSocket support? An alternative question would be: Is it possible to add WebSocket support to the base application using only one line of code? To answer these two questions, we will tap into a technique similar to the one we used to mount the app instance to a set of routers (the API, for example).

The need for a store manager in a nodejs WebSocket application

JavaScript, and for that matter nodejs, is a single-threaded programming language.

However, that does not mean that parallel computing is not feasible. The threading model can be replaced with a process-based model when it comes to parallel computing. This enhancement comes with an additional challenge: How to make it possible for processes to communicate or share data, especially when processes are running on two separate CPUs.

The answer is to use one or more third-party processes that handle inter-process communication. Key-value stores are good examples that make the magic possible.

How to integrate redis in a nodejs WebSocket application

redis comes with an expressive API that makes it easy to integrate with an existing nodejs application.

It makes sense to question the approach used while adding this capability to the application. In the following example, any message received on the wire will be logged into the shared redis key store.

All subscribed message listeners will then be notified about an incoming message. In the event there is a response to send back, the same approach will be followed, and the listener will be responsible for sending the message back down the wire. This process may be repetitive, but it is one of the good ways to handle this kind of scenario.
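
A hedged sketch of that flow, assuming the redis client library (callback-style API) and the wss socket.io server from the earlier example; the channel name is hypothetical:

var redis = require('redis');
var publisher = redis.createClient();
var subscriber = redis.createClient();

subscriber.subscribe('messages');
//every subscribed listener is notified about an incoming message
subscriber.on('message', function(channel, message){
    wss.emit('message', message);//push the message back down the wire
});

wss.on('connection', function(socket){
    //any message received on the wire is logged into the shared redis key store
    socket.on('message', function(message){
        publisher.publish('messages', message);
    });
});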

There is an entire blog dedicated to modularizing redis clients here

How to modularize redis in a nodejs WebSocket application

The redis integration example in a nodejs application is tightly coupled to redis event handlers. Ejecting handlers can be a good starting point. Grouping ejected handlers in a module can follow suit. The next step in modularization can be composing (injecting the redis client into) the resulting modules when needed.

How to share sessions between the HTTP server and WebSocket server

If we look closer, especially when dealing with namespaces, we find a similarity between HTTP requests (handled by express in our example) and WebSocket messages (handled by socket.io in our example). For applications that require authentication, or any other type of session on the server side, it would not be necessary to have one authentication per protocol. To solve this problem, we will rely on a middleware that passes session data between the two protocols.

Modularization reduces the complexity associated with large-scale nodejs applications in general. We assume that socket.io/expressjs applications won't be an exception in the current context. In a real-time context, we focus on making most parts accessible to be used by other components and tests.

Express routes can use the socket.io instance to deliver some messages. The structure of a socket.io-enabled application looks like the following:

//module/socket.js
var socket = require('socket.io');
//server: the http server (or express app) instance provided by the caller
module.exports = function(server){
  var io = socket();
  io = io.listen(server);
  io.on('connect', fn); //fn: connection/disconnection handlers defined elsewhere
  io.on('disconnect', fn);
  return io;
};
    
//in server.js 
var express = require('express'); 
var app = express();
var server = require('http').createServer(app);

//Application app.js|server.js initialization, etc. 
require('./module/socket.js')(server);
        

For the socket.io app to use the same Express server instance, or to share the route instance with the socket.io server:

//routes.js - has all routes initializations
var route = require('express').Router();
module.exports = function(){
    route.all('*', function(req, res, next){ 
        res.send(); 
        next();
    });
    return route;
};

//socket.js - has socket communication code
var io = require('socket.io');
module.exports = function(server){
  //server will be provided by the calling application
  //server = require('http').createServer(app);
  io = io.listen(server);
  return io;
};

Socket Session sharing

Sharing session between socket.io and Express application

//@link http://stackoverflow.com/a/25618636/132610
//Sharing session data between `socket.io` and Express 
//sessionMiddleware: the same express-session middleware instance used by the Express app
sio.use(function(socket, next) {
    sessionMiddleware(socket.request, socket.request.res, next);
});

Conclusion

Modularization is a key strategy in crafting re-usable, composable software. Modularization brings not only elegance, but also makes copy/paste detectors happy, and at the same time improves both performance and testability.

In this article, we revisited how to aggregate WebSocket code into composable and testable modules. Grouping related tasks into modules brings the ability to add Pub/Sub support on demand and to swap solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.

References + Reading List

tags: #snippets #code #annotations #question #discuss

The ever-growing number of files does not spare test files. The number of similar test double files can be taken as an indication of a need to refactor, or modularize, test doubles. This blog applies the same techniques we used to modularize other layers of a nodejs application, but in an automated testing context.

In this article we will talk about:

  • The need to have test doubles
  • How a utilities library relates to a fixtures library
  • Reducing repetitive imports via a unified export library
  • How to modularize fixtures of spies
  • How to modularize fixtures of mock data
  • How to modularize fixtures of fakes
  • How to modularize fixtures of stubs
  • How to modularize test doubles for reusability
  • How to modularize test doubles for composability

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var should = require('should');
var expect = require('expect');
var chai = require('chai');

Example: repetitive test double imports found across test files

What can possibly go wrong?

The following points may be a challenge when modularizing test doubles:

  • Some testing libraries share dependencies with the project they are supposed to test
  • Individual test doubles can be replicated in multiple places
  • With this knowledge, how can we reduce the waste and reuse most of the dependencies?

In the next sections, we make a case on modularization for reusability as a solution to reduce code duplication.

The Status Quo

Every test double library is in fact an independent library. That remains true even when some libraries are bundled and shipped together, as is the case for chai (which ships with should and expect). Every mock, every spy, and every stub we make in one place can potentially be replicated to multiple other places that test similar code blocks, or code blocks that share dependencies.

One of the solutions to share common test double configurations across multiple test cases is to organize test doubles in modules.

The need to have test doubles in tests

In this series, there is one blog that discusses the difference between various test doubles: spy/mock/stubs/fake and fixtures. For the sake of brevity, that will not be our concern for the moment. Our concern is to reflect on why we should have test doubles in the first place.

From the time and cost perspective, it takes time to load one single file. It takes even longer to load multiple files, be it in parallel or sequentially. The higher the number of test cases spanning multiple files, the longer the test runner process will take to complete execution. This adds more execution time to an already slow process.

If there is one improvement among others that would save us time, reusing the same library while mimicking the implementation of things we don't really need to load (mocking, etc.) would be one of them.

Testing code acts like a state machine, or a pure function: the same input results in the same output. Test doubles are essentially tools that can help us save time and cost as drop-in replacements of expected behaviors.

How utilities relate to fixtures

In this section, we pause a little bit to answer the question: “How does a utilities library relate to a fixtures library?”.

Utility libraries (utilities) provide tools that are not necessarily related to the core business of the program, but are necessary to complete a set of tasks. The need to have utilities is not limited to business logic only, but extends to testing code. In the context of tests, the utilities are going to be referred to as fixtures. Fixtures can have computations or data that emulate a state under which the program has to be tested.

Grouping imports with unified export library

The module system provided by nodejs is a double-edged sword. It presents opportunities to create granular systems, but repetitive imports weaken the performance of the application.

To reduce repetitive imports, we make good use of the index file. This compensates for our refusal to attach modules to the global object. It also makes it possible to abstract away the file structure: one doesn't have to know the whole project's structure to import just one single function.
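
A minimal sketch of such a unified export, assuming a hypothetical test/fixtures directory whose file names are illustrative:

//test/fixtures/index.js - one entry point for all test doubles
module.exports.chai = require('chai');
module.exports.spies = require('./spies');
module.exports.data = require('./data');
module.exports.stubs = require('./stubs');

//anywhere in a test file: one import instead of four
var fixtures = require('./fixtures');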

How to modularize fixtures of spies

The modularization of spies takes one step in general. Since the spies already have a name, it makes sense to group them under the fixtures library, by category or feature, and export the resulting module. The use of the index file makes it possible to export complex file systems via one single import (or export, depending on perspective).
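
A hedged sketch of a spies fixture module; exporting factories instead of shared instances keeps each test case with a fresh spy, and the names used here are hypothetical:

//test/fixtures/spies.js
var sinon = require('sinon');

//a spy for express-style next() callbacks
module.exports.makeNext = function(){ return sinon.spy(); };
//a spy-backed logger double
module.exports.makeLogger = function(){
    return { info: sinon.spy(), error: sinon.spy() };
};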

How to modularize fixtures of mock data

Mock data is the cornerstone of simulating a desired test state when a particular kind of data is injected into a function/system. Grouping related data under the same umbrella makes sense in most cases. After that, it makes sense to expose the data via export constructs.
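
A minimal sketch of a mock data module, with hypothetical data shapes grouped by feature:

//test/fixtures/data.js
module.exports.users = [
    { id: 1, name: 'Jane Doe' },
    { id: 2, name: 'John Doe' }
];
module.exports.emptyResponse = { status: 200, body: [] };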

How to modularize fixtures of fakes

Fakes are functions similar to the implementation they are designed to replace, most of the time third-party functionality, and can be used to simulate the original behavior. When two or more fakes share striking similarities, they become good candidates for mergers, refactoring, and modularization.

How to modularize fixtures of stubs

Stubs are often mistaken for mocks, because they tend to operate in similar use cases. A stub is a fake that replaces a real implementation, capable of receiving calls and producing a pre-determined outcome using mock data. Modularization takes a single step when the stub is already named: export and reveal/expose the function as an independent, exportable function.
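
A hedged sketch of a stubs fixture module, reusing the hypothetical mock data module above and sinon:

//test/fixtures/stubs.js
var sinon = require('sinon');
var data = require('./data');//mock data module from the previous example

//returns a stub that resolves with pre-determined mock data
module.exports.makeFindUserById = function(){
    return sinon.stub().resolves(data.users[0]);
};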

How to modularize test doubles for reusability

Test doubles are reusable in nature. There is no difference between designing functions/classes and test doubles for reusability per se. To be able to reuse a class/function, that function has to be exposed to the external world. That is where the export construct comes into the picture.

How to modularize test doubles for composability

Composability, on the other hand, is the ability of a module to be combined with others. For that to happen, the main client that is going to use the library has to be injected into the library, either via a thunk or a similar strategy. The following example shows how a test double can be modularized for composability.
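
A minimal sketch of that idea, under the assumption that the object to stub is injected via a thunk rather than imported by the double itself; file and function names are hypothetical:

//test/fixtures/emit-double.js
var sinon = require('sinon');

//the client object is injected, so the same double composes with any implementation exposing emit()
module.exports = function makeEmitDouble(socket){
    return function stubEmit(){
        return sinon.stub(socket, 'emit').returns(true);
    };
};

//composition in a test file
//var stubEmit = require('./fixtures/emit-double')(mySocket);
//var emit = stubEmit(); /* ... assertions ... */ emit.restore();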

Some stubbing questions we have to keep in mind:

  • How does stubbing differ from mocking
  • How does stubbing differ from spying: spies/stubs are functions with pre-programmed behavior
  • How to know if a function has been called with a specific argument? For example: I want to know if res.status(401).send() has been called

More has been discussed in this blog as well: spy/mock/stubs/fake and fixtures

Making chai, should and expect accessible

The approach explained below makes it possible to make a pre-configured chai available in a global context, without attaching chai explicitly to the global object.

  • There are multiple ways to go with modularization, but the most basic is using exports.
  • This technique will not make any library available by default, but is designed to reduce boilerplate when testing.
var chai = require('chai');
module.exports.chai = chai; 
module.exports.should = chai.should(); //invoking should() extends Object.prototype
module.exports.expect = chai.expect; 

Example: a unified export of pre-configured assertion libraries

Conclusion

Modularization is a key strategy in crafting re-usable composable software. Modularization brings elegance, improves performance, and in this case, re-usability of test doubles across the board.

In this article, we revisited how test double modularization can be achieved by leveraging the power of module.exports (or export in ES6+). The ever-increasing number of similar test double instances makes them good candidates for modularization; at the same time, it is imperative that the modularization stays minimalistic. That is the reason why we leveraged the index file, to make sure we do not overload already complex architectures. There are additional complementary materials in the “Testing nodejs applications” book on this very same subject.

References

tags: #snippets #nodejs #spy #fake #mock #stub #test-doubles #question #discuss

Testing functions attached to objects other than a class instance constitutes an intimidating edge case at first sight. Such objects range from object literals to modules. This blog explores some test double techniques to shine a light on such cases.

For context, the difference between a function and a method is that a method is a function encapsulated into a class.

In this article we will talk about:

  • Key difference between a spy, stub, and a fake
  • When it makes sense to use a spy over a stub

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');

module.exports.removeUserPhoto = function(req, res, next){
    let filepath = `/path/to/photos/${req.params.photoId}.jpg`;
    fs.unlink(filepath, (error) => {
        if(error) return next(error);
        return res.status(200).json({
            message: `Your photo is removed - Photo ID was ${req.params.photoId}`
        });
    });    
}

Example: A simple controller that takes a photo ID and deletes files associated with it

What can possibly go wrong?

Some challenges when mocking chained functions:

  • Stubbing a method, while keeping original callback behavior intact

Show me the tests

From the How to mock chained functions article, there are three avenues relevant to the current context that we leverage for our mocking strategy.


let outputMock = { /* ... mocked return value ... */ };
//approach 1: a stub that returns a canned output
sinon.stub(obj, 'func').returns(outputMock);
//approach 2: a stub backed by a fake implementation
sinon.stub(obj, 'func').callsFake(function fake(){ return outputMock; });
//approach 3: a spy wrapping a fake implementation
let func = sinon.spy(function fake(){ return outputMock; });

We can put those approaches to the test in the following test case:

var sinon = require('sinon');
var assert = require('chai').assert;

// Somewhere in your test code. 
it('#fs:unlink removes a file', function () {
    this.fs = require('fs');
    var func = function(path, fn){ return fn.apply(this, [null]); };//mocked behaviour: invoke the callback with no error

    //Spy + Stubbing fs.unlink function, to avoid a real file removal
    var unlink = sinon.stub(this.fs, "unlink").callsFake(func);
    this.fs.unlink('/path/to/photos/some-id.jpg', function(){});//exercise the stubbed function
    assert(this.fs.unlink.called, "#unlink() has been called");

    unlink.restore(); //restoring default function 
});

Conclusion

In this article, we established the difference between stub/spy and fake concepts, how they work in concert to deliver effective test doubles, and how to leverage their drop-in-replacement capabilities when testing functions.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #code #annotations #question #discuss

Mocking and stubbing walk hand in hand. In this blog, we document stubbing functions with promise constructs. The use cases are going to be based on Models. We keep in mind that there is a clear difference between mocking and stubbing/spying/using fakes.

In this article we will talk about:

  • Stub a promise construct by replacing it with a fake
  • Stub a promise construct by using third-party tools
  • Mocking database-bound input and output

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


//Lab Pet
window.fetch('/full/url/').then(function(res){ 
    service.doSyncWorkWith(res); 
    return res; 
}).catch(function(err){ 
    return err;
});

Example: a promise-based fetch call, the code under test

What can possibly go wrong?

When trying to figure out how to approach stub functions that return a promise, the following points may be a challenge:

  • How to deal with the asynchronous nature of the promise.
  • Making stubs drop-in replacements of some portion of the code block, while leaving everything else intact.

The following sections explore how to make the points stated above work.

Content

  • From Johnny Reeves Blog: Stub the services' Async function, then return mocked response

var sinon = require('sinon');
describe('#fetch()', function(){
    var fetchStub;
    before(function(){ 
        //NOTE: pick ONE of the following alternatives; stubbing the same function twice throws
        //one way
        fetchStub = sinon.stub(window, 'fetch').returns(bakedPromise(mockedResponse));
        //other way
        fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
            return bakedPromise(mockedResponse);
        });
        //other way
        fetchStub = sinon.stub(window, 'fetch').resolves(mockedResponse);
    });
    after(function(){ fetchStub.restore(); });
    it('works', function(){
        //use the default function like nothing happened
        window.fetch('/url');
        assert(fetchStub.called, '#fetch() has been called');
        //or 
        assert(window.fetch.called, '#fetch() has been called');
    });
    it('fails', function(){
        //one way
        fetchStub = sinon.stub(window, 'fetch').callsFake(function(options){ 
            return bakedFailurePromise(mockedResponse);
        });
        //another way using 'sinon-stub-promise's returnsPromise()
        //PS: You should install => npm install sinon-stub-promise
        fetchStub = sinon.stub(window, 'fetch').returnsPromise().rejects(reasonMessage);
    });
});

Example: stubbing window.fetch with a baked promise

  • bakedPromise() is any function that takes a mocked (baked) response and returns a promise
  • This approach doesn't tell you whether Service.doJob() has been called. For that, add a spy or an assertion on the service function itself.

Conclusion

In this article, we established the difference between promises and regular callbacks, how to stub promise constructs, especially in a database operations context, and how to replace them with fakes. Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #code #annotations #question #discuss

The stream API provides a powerful asynchronous computation model that keeps a small memory footprint. As exciting as it may sound, testing streams is somewhat intimidating. This blog lays out some key elements necessary to be successful when mocking the stream API.

We keep in mind that there is a clear difference between mocking and stubbing/spying/fakes, even though we use the term mock interchangeably.

In this article we will talk about:

  • Understanding the difference between Readable and Writable streams
  • Stubbing Writable stream
  • Stubbing Readable stream
  • Stubbing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');
var gzip = require('zlib').createGzip();//quick example to show multiple pipings
var route = require('express').Router(); 
//getter() reads a large file of songs metadata, transforms it, and sends back scaled-down metadata 
route.get('/songs', function getter(req, res, next){
        let rstream = fs.createReadStream('./several-TB-of-songs.json'); 
        rstream.
            pipe(new MetadataStreamTransformer()).
            pipe(gzip).
            pipe(res);
        // forwarding the error to the next handler     
        rstream.on('error', (error) => next(error, null));
});

At a glance, the code is supposed to read a very large JSON file (terabytes of metadata about songs), apply some transformations, gzip, and send the response to the caller by piping the results into the response object.

The next example demonstrates what a typical transformer such as MetadataStreamTransformer looks like.

const inherits = require('util').inherits;
const Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//<= re-enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo  process chunk + by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || random() });//random(): any id generator available in scope
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//tells that operation is over 
    if(typeof next === 'function') {next();}
};

Inheritance as used in this program might be old-style, but it illustrates well enough, in a prototypal way, that our MetadataStreamTransformer inherits from Stream#Transform.

What can possibly go wrong?

Stubbing functions in a stream processing scenario may yield the following challenges:

  • How to deal with the asynchronous nature of streams
  • Identify areas where it makes sense to use a stub, for instance: expensive operations
  • Identifying key areas needing drop-in replacements, for instance reading from a third party source over the network.

Primer

The key when stubbing streams is:

  • To identify where the heavy lifting is happening. In pure stream terms, functions that execute _read() and _write() are our main focus.
  • To isolate some entities, to be able to test small parts in isolation. For instance, make sure we test MetadataStreamTransformer in isolation, and mock any response fed into the .pipe() operator in other places.

What is the difference between readable vs writable vs duplex streams? The long answer is available in substack's Stream Handbook

Generally speaking, Readable streams produce data that can be fed into Writable streams. Readable streams can be .pipe()d from, but not into. Readable streams have readable|data events and, implementation-wise, implement ._read() from the Stream#Readable interface.

Writable streams can be .pipe()d into, but not from. For example, the res object in the examples above has an existing stream piped into it. The opposite is not always guaranteed. Writable streams have drain|finish events and, implementation-wise, implement ._write() from the Stream#Writable interface.

Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transformer streams are duplex and implement ._transform() from the Stream#Transform interface.

Modus Operandi

How to test the above code by taking on smaller pieces?

  • fs.createReadStream won't be tested, but stubbed and returns a mocked readable stream
  • .pipe() will be stubbed to return a chain of stream operators
  • gzip and res won't be tested, and are therefore stubbed to return writable+readable mocked stream objects
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next() and check that it has been called
  • MetadataStreamTransformer will be tested in isolation and MetadataStreamTransformer._transform() will be treated as any other function, except it accepts streams and emits events

How to stub stream functions

describe('/songs', () => {
    before(() => {
        //responseMock: a mocked response object, defined elsewhere in the test setup
        sinon.stub(fs, 'createReadStream').returns({
            pipe: sinon.stub().returns({
                pipe: sinon.stub().returns({
                    pipe: sinon.stub().returns(responseMock)
                })
            }),
            on: sinon.spy(() => true)
        });
    });
    after(() => fs.createReadStream.restore());
});

This way of chained stubbing is available in our toolbox. Great power comes with great responsibilities, and wielding this sword may not always be a good idea.

There is an alternative at the very end of this discussion

Testing the transformer stream class in isolation may be broken down into:

  • stub the whole Transform instance
  • Or stub the .push() and simulate a write by feeding in the readable mocked stream of data

The stubbed push() is a good place to add assertions.

it('_transform()', function(done){
    var Readable = require('stream').Readable;
    var rstream = new Readable({objectMode: true, read: function(){}});//readable stream that can push mocked chunks
    var mockPush = sinon.stub(MetadataStreamTransformer.prototype, 'push').callsFake(function(data){
        assert.isNumber(data.id);//testing data sent to callers, etc
        return true;
    });
    var tstream = new MetadataStreamTransformer();
    rstream.pipe(tstream);
    rstream.push('{"id": 1}');
    rstream.push('{"id": 2}');
    rstream.push(null);//signals the end of the mocked input
    tstream.on('finish', function(){
        expect(tstream.push.called, '#push() has been called').to.be.true;
        mockPush.restore();
        done();
    });
});

How to Mock Stream Response Objects

The classic example of a readable stream is reading from a file. This example shows how to mock fs.createReadStream and return a readable stream, capable of being asserted on.

//the stub can emit two or more chunks of data + close the stream
var PassThrough = require('stream').PassThrough;
var rstream = new PassThrough();//stand-in readable stream
sinon.stub(fs, 'createReadStream').callsFake(function(file){ 
    //trick from @link https://stackoverflow.com/a/33154121/132610
    assert(file, '#createReadStream received a file');
    rstream.emit('data', "{id:1}");
    rstream.emit('data', "{id:2}");
    rstream.emit('end');
    return rstream; 
});

var pipeStub = sinon.spy(rstream, 'pipe');
//Once called, the structure above will stream two elements: good enough to simulate reading a file.
//to stub the `gzip` library: another transformer stream 
var next = sinon.stub();
//use this function (req and res are mocked request/response objects) | or call the whole route 
getter(req, res, next);
//expectations follow: 
assert(rstream.pipe.called, '#pipe() has been called');

Conclusion

In this article, we established the difference between Readable and Writable streams and how to stub each one of them when unit testing.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #TDD #streams #nodejs #mocking

Sometimes, changes in code involve changes in models. Fields can be added or removed depending on the requirements at hand. This blog post explores some techniques to make versioning work with mongodb models.

There is a more generalist Database Maintenance, Data Migrations, and Model Versioning article that goes beyond mongodb models.

In this article we will talk about:

  • Model versioning strategies
  • Avoiding model versioning colliding with database engine upgrades
  • Migration strategy for model upgrades with schema change
  • Migration strategy for models with hydration
  • Tools that make model migrations easier

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

These snippets illustrate the evolution of one fictitious UserSchema. A schema describes what a model will look like once compiled and ready to be used with the mongodb database engine.

//Data Model Version 1.0.0
var UserSchema = new mongoose.Schema({name: String});

//Data Model Version 1.0.1
var UserSchema = new mongoose.Schema({name: String, email: String});

//Data Model Version 1.1.0
var UserSchema = new mongoose.Schema({
    name: String, 
    email: {type: String, required: true}
});

//Data Model Version 2.0.0
var UserSchema = new mongoose.Schema({ 
    name: {first: String, last: String},
    addresses: [Address],
    orders: [Order]
});

module.exports = mongoose.model('User', UserSchema);

Example: Evolution of a mongoose data model

What can possibly go wrong?

It is common to execute software updates in bulk, especially when the application is a monolith. The term bulk is used for lack of a better word, but the idea behind it can be summarized as the need to update data models, coupled with data hydration to new data models, with the potential of upgrading the database engine, all at the same time.

It becomes clear that, when we have to update more than two things at the same time, complex operations will get involved, and the more complex the update gets, the nastier the problem will become.

When trying to figure out how to approach either migration from one model version to the next, from one low/high-level ORM/ODM (mongoose, knex, sequelize) version to the next, from one database engine version to the next, or from one database driver version to the next, we should always keep in mind some of these challenges (questions):

  • When is it the right time to do a migration
  • How to automate data transformations from one model version to the next
  • What is the difference between update, and upgrade in our particular context
  • What are the bottlenecks(moving parts) for the current database update/upgrade
  • How can we align model versioning, data migrations alongside database updates/upgrades/patches

The key strategy to tackle difficult situations, at least in the context of these blog post series, has been to split big problems into sub-problems, then resolve one sub-problem at a time.

Update vs Upgrade

Database updates and patches are released on a regular basis; they are safe and do not cause major problems when the time comes to apply them. From a system maintenance perspective, it makes sense to apply patches as soon as they come out, and on a regular, repeatable basis. For example, every Friday at midnight, a task can apply a patch to the database engine. At this point, there is one issue off our plate. How about database upgrades?

Upgrades

Avoiding model versioning colliding with other database-related upgrades ~ Any upgrade has breaking changes in it; some are minor, others are really serious, such as data format incompatibilities and what-not. Since upgrades can cause harm, it makes sense to NOT do upgrades at the same time as model versioning or, worse, data model versioning. Examples of such upgrades include ORM/ODM, database driver, and database engine upgrades. Since they are not frequent, they can be planned once every quarter, depending on the schedule of the software we are talking about. It makes sense to have a window to execute, test, and adapt if necessary. Once a quarter, as a part of sprint cleaning, makes more sense. As a precaution, it makes sense to NOT plan upgrades at the same time as model version changes.

Model versioning strategies

As expressed in the sample code, the evolution of data-driven applications goes hand in hand with schema evolution. As the application grows, some decisions are going to be detrimental and may need corrective measures in later iterations. We keep in mind that some new features require revisiting the schema. In all cases, the model schema will have to change to adapt to new realities. The complexity of a schema change depends on how complex the addition or removal turns out to be. To reduce complexity and technical debt, every deployment should involve steps to apply schema changes and re-hydrate data into new models to reflect the new changes.

When possible, features that require a schema change can be moved to a minor (Major.Minor.Patch) release, whereas everyday releases (in continuous delivery mode) can be just patches. Similarly, major version releases can include ORM/ODM upgrades, database driver upgrades, database engine upgrades, and data migration from an old system to the new system. It is NOT good to include model changes in a major release; we can keep those in minor releases.

Migration strategy for model upgrades with schema change

From previous sections, it makes sense to keep model upgrades with schema changes as a minor release task, whether they imply data hydration or not.

Migration strategy for model upgrades with data hydration

Data hydration is necessary in case the data structure has changed to remove fields, split fields, or add embedded documents. Data hydration may not be necessary when the schema change relaxes validity or availability constraints. However, if a field becomes required, then it makes sense to add a re-hydration strategy. It is better to execute hydration every time there is a minor release, even when not strictly necessary.
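
A minimal hydration sketch, assuming the schema evolution shown earlier (a single name string split into {first, last}) and using the raw driver collection so legacy documents are not re-cast by the new schema; names and connection details are illustrative:

const mongoose = require('mongoose');

module.exports = async function hydrateUserNames(){
    //assumes mongoose.connect(...) has already been called by the application
    const users = mongoose.connection.collection('users');//raw collection, bypasses schema casting
    const cursor = users.find({ name: { $type: 'string' } });//legacy documents only
    while (await cursor.hasNext()){
        const user = await cursor.next();
        const [first, ...rest] = user.name.split(' ');
        await users.updateOne(
            { _id: user._id },
            { $set: { name: { first: first, last: rest.join(' ') } } }
        );
    }
};

Example: a hydration script that migrates legacy name strings to the new structure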

Tools that make model upgrades easier

There are some libraries that can be used to execute data migration/hydration as part of a model upgrade operation. node-migrate is one of them. Advanced tools for relational databases, such as flywaydb, can also be used. When it comes to model upgrades, a consistent, repeatable strategy gives more bang for your buck than a full-fledged solution in the wild.

Conclusion

In this article, we revisited how to align schema versioning with mongodb releases, taking into consideration data migration and hydration, as well as tools to make data handling easier. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #mongodb #mongoose #migration #data-migration #model-migration #nodejs

The idea to write this post stems from reading some interesting bug reports from various OSS projects' Github issues. Instead of prescribing what should be considered right or wrong bug reporting, this post takes a different turn and focuses on asking basic questions we should be able to answer when filing a new bug report.

At the end of the reading, you will have inspiration on how to make key daily improvements when filing bug reports, to point developers in the right direction when reading and resolving reported issues.

In this article we will talk about:

  • Bug report based on GivenWhenThen user story formula
  • Bug report for new and regression cases

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

What to expect from a bug report

When writing a bug report blueprint, we will have to thoroughly answer the following questions in plain English.

The final product is a mold that bug reports have to fit in, or a blueprint our bug reports have to be built around.

To start this exercise, let's take our typical bug report, and verify if it answers the questions below.

  • How much does a typical bug cost
  • Is it a bad idea to have developers do their own QA
  • Is it a good idea to have developers do their own QA
  • What is the optimal QA to Dev ratio
  • What developers love to read in a bug report
  • What makes developers hate reading a bug report
  • What should go into a bug report
  • What should not go into a bug report
  • Why good bug reporting communication matters
  • Can linking bugs to relevant user story acceptance criteria help fix bugs faster
  • How can linking bugs to relevant user story acceptance criteria be done
  • Should bug reports have templates, similar to user stories
  • What a functional bug report template should look like

Answers to the questions stated above provide a starting point to create a blueprint of a bug report for your organization.

Conclusion

In this article, we revisited how to write bug reports that convey a clear message on what is wrong with the system, in a way that developers may re-use the bug report as a test case in their automated regression test suites. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #bugs #bug-report #QA

This article draws inspiration from Chris Beams' blog post on writing good commit messages. There is no consensus on how to write a test message, but there are certainly plenty of conventions around test messages. At the end of the day, test messages get written according to developers' taste at a given time. This article entices curiosity around choosing the right test message format.

Like other posts that came before this blog, we will take a different turn, and focus on asking basic questions, as opposed to prescribing some magic solution to the issue.

If you haven't already, read the “How to write Test Cases developers will love”. The key difference between these two blog posts is that this blog focuses more on the semantics of a message. The other blog focuses on the adoption of a template or school of thought when it comes to test cases.

This blog tries to present elements that can improve the messaging when writing a test case.

In this article we will talk about:

  • Definition of a message in a testing context
  • Difference between BDD and TDD test scenario message
  • Choosing the right test messages based on acceptance criteria
  • Choosing the right regression test message
  • Choosing the right message for testing class methods
  • Choosing the right messages for solo functions
  • Elements of a good testing message ~ How to spot a well-written message
  • The wrong way to write a message ~ How to spot a badly-written message
  • Examples of state-of-the-art test messages
  • Writing messages that validate a User Story/JTBD ~ integration/e2e/system tests

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

What to expect from test cases

When writing a test message blueprint, we will have to thoroughly answer the following questions in plain English. Our final product is a mold most of our test messages have to fit in.

To start this exercise, let's take our typical test case message, and verify if it satisfies the questions below.

  • Gather a sample of some good test messages from open source projects
  • Write down some test messaging you find commendable
  • Which ones qualify as state-of-the-art test messages
  • What can be considered Elements of a good test message
  • How to spot a well-written message
  • What should be a definition of a message in a testing context
  • What is the difference between BDD and TDD message style, if any
  • What should be considered the right messaging for regression test cases
  • What can be seen as the right message for testing class methods
  • What can be seen as the right message for solo functions
  • What can be considered the wrong way to write a message
  • How to spot a badly written message
  • How to write a test message that validates User Story
  • How to write a test message that validates Job Story

Answers to these kinds of questions provide a baseline to create a blueprint of test messages developers and businesses will love reading.

Conclusion

In this article, we revisited how to write test messages that convey clear information about what is being tested. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #testing #bugs #QA

How to write test cases developers will love reading

This blog is a follow-up to How to write a bug report developers will love reading. It stems from mixed experiences reading bug reports, unit test messages, and feature requests on various OSS projects on Github and developers' blog posts.

Following in the footsteps of other blogs in this series, we will focus on asking basic questions, as opposed to prescribing some magic solution, in a way that triggers a rethink of the state of test cases and/or user stories (JTBD).

If you haven't already, read the “How to write User Stories developers will love”

The idea is to challenge our thinking into discovering improvements in the messaging, to make test cases actually validate features and, ideally, prevent bugs from happening in the first place.

In this article we will talk about:

  • Choosing the right wording when writing test cases requires getting into how developers think and approach problem-solving.
  • The best test cases depend on the best user stories. For that reason, the reports are based on GivenWhenThen, in case the user stories are too.
  • Choosing the right template for test cases, for new and regression cases
  • Writing a testing message that aligns with features
  • Writing informative testing messages (on reports and logs)
  • Case > Feature > Expectations

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

What to expect from test cases

It has been said that the best way to document any code block is via its test cases. However, it is also true that some test cases do not provide clear messaging around what the code blocks they are about to test do, or are supposed to do. That is what the reflections in this blog are meant to address. Moreover, there is little to no convention around writing such messages, which sometimes makes reading test reports quite futile, if not unreadable.

This article suggests the kind of message format to follow when we need our test messages to make sense for us in the future, and for first-time readers. The format is based on observations found in popular open-source frameworks. To limit the scope, our observations are going to be based on the following frameworks: mochajs, sinonjs, expressjs and nodejs.
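
As a hedged illustration of such a format, the following mocha sketch follows the (Use Case > Feature > Expectations) structure discussed below; the route and messages are hypothetical:

describe('User API', function(){              //the use case under test
    describe('#GET /users/:id', function(){   //the feature under test
        it('returns the user when the id exists', function(){ /* ... */ });       //expectation
        it('responds with 404 when the user is missing', function(){ /* ... */ });//expectation
    });
});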

When writing the test case blueprint, we will have to thoroughly answer the following questions in plain English. Our final product is a mold test cases have to fit in, or a blueprint our test cases have to be built around.

To start this exercise, let's take our typical test case, and verify if it satisfies the questions below.

  • How much does a bug cost
  • How do you define a test case
  • What should go into a test case
  • What should NOT go into a test case
  • How can my testing messaging align with the feature requests under test
  • How can I evaluate if my testing message is informative enough for those needing to read test reports or assertion logs
  • How to link test cases to acceptance criteria
  • How to model test cases around: (Use Case > Feature > Expectations)
  • How to model test cases: (Given > When > Then)
  • How to model test cases after JTDB (Jobs to be done)
  • How to model test cases after user stories
  • Is it OK to refactor test cases
  • How to go about refactoring test cases
  • Why messaging matters when crafting test cases
  • What are the pros and cons of TDD vis à vis BDD test case
  • Is it possible to use both TDD and BDD approaches in the same project

Answers to these kinds of questions provide a starting point to create a blueprint of test cases other developers will love.

Conclusion

In this article, we revisited how to write test cases that convey a clear message on what is wrong with the system, in a way that developers may correct the course with effective tests. As always, there are additional complementary materials in the “Testing nodejs applications” book, about how to write test in a nodejs environment.

References

tags: #bugs #bug-report #QA
