Simple Engineering

Testing authenticated routes sounds intimidating, but the trick to getting it right is simple: the right combination of mocking the session object and stubbing the authentication middleware. This article revisits these two key ingredients to make such tests work.

In this article we will talk about:

  • Avoiding the integration test trap on authenticated routes
  • Stubbing authentication middleware for faster tests
  • Mocking session data on authentication-protected routes

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


// Authentication Middleware in middlewares/authenticated.js 
var jwt = require('jwt-simple');//assuming jwt-simple; any JWT library works
var config = require('../config');

//a payload is valid as long as it has not expired
function validate(payload){
    return payload && payload.exp > Date.now() / 1000;
}

module.exports = function(req, res, next){
    let token = req.headers.authorization;
    let payload = jwt.decode(token, config.secret);
    if(!validate(payload)) return next(new Error('session expired'));
    req.user = payload.sub;//attach the authenticated user to the request
    return next();
};

//Session Object in settings/controller/get-profile  
var UserModel = require('../models').User;//assumed model import

module.exports = function getProfile(req, res, next){
    let user = req.session.user;
    UserModel.findById(user._id, (error, user) => {
        if(error) return next(error);
        return res.status(200).json(user); 
    });     
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middlewares/authenticated');
var getProfile = require('./settings/get-profile');
router.get('/profile/:id', authenticated, getProfile);
module.exports = router;

What can possibly go wrong?

There is a clear need to mimic real authentication when testing expressjs authenticated routes, and this need sometimes leads to an integration testing trap.

The following are other challenges we may expect along the way:

  • Avoid testing underlying libraries that provide authentication features
  • Simulate authenticated session data
  • Mock requests behind protected third-party routes, such as Payment Gateways, etc.

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own “Choosing the right tools” framework, the tools below are not a prescription; rather, they are the ones that made sense to complete this article:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.
  • supertest framework for mocking RESTful APIs and nock for intercepting and mocking third-party HTTP requests (see the sketch after this list). supertest is written on top of superagent, so we get both testing toolkits.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting we recommend istanbul.
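
To give a taste of how nock keeps third-party calls off the network, here is a minimal sketch; the gateway host and endpoint are hypothetical:

var nock = require('nock');

//intercept POST /charges and reply without touching the network
var gateway = nock('https://api.payment-gateway.example')
  .post('/charges')
  .reply(201, { id: 'ch_1', status: 'succeeded' });

//code under test that calls the gateway now hits the interceptor instead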

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

The key to mocking a session object lies in this line, found in the example above: let user = req.session.user;. With that knowledge, we can write the following test:


describe('getProfile', () => {
  let req, res, next, json;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { user: { /*...*/ } };//mocking session object
    req = { params: {id: 1234}, session: sessionObject };
    res = { status: (code) => ({ json }) };//status() returns an object with a mocked json()
  });

  it('returns a profile', () => {
    getProfile(req, res, next);
    expect(json).toHaveBeenCalled();
  });

});

On the other hand, since authenticated() resides in its own module, it can simply be stubbed like any other function when the time comes to test the whole route, for instance with a stub that bypasses authentication by calling next() right away.
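
A minimal sketch of that stubbing, assuming proxyquire is available to swap the middleware out of the router module:

var proxyquire = require('proxyquire');
var sinon = require('sinon');

//a stub that bypasses authentication by calling next() right away
var authenticated = sinon.stub().callsFake((req, res, next) => next());

//load the router with the real middleware replaced by the stub
var router = proxyquire('./router', {
  './middlewares/authenticated': authenticated
});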

Conclusion

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect.

One use case of tapping into middleware reusability/composability and testability is the authentication middleware presented here. Writing a good, meaningful test message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

Middleware is one of the components that improve the composability of the expressjs router. This blog post approaches middleware testing from a real-world perspective. The use case is CORS, since it is found in almost all expressjs-enabled applications.

In this article we will talk about:

  • How to mock Request/Response Objects
  • Spying on whether certain calls have been made
  • Making sure requests don't leave the local machine

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

The CORS middleware is one of the most used middlewares in the nodejs community.

module.exports = function cors(req, res, next) {
    res.set('Access-Control-Allow-Credentials', 'true');
    res.set('Access-Control-Allow-Origin', '*');
    res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
    res.set('Access-Control-Allow-Headers', 'X-CSRF-Token, X-CSRF-Strategy, X-Requested-With, Accept, Authorization, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version');
    res.set('Content-Type', 'application/json');
    res.set('Access-Control-Max-Age', '3600');//the standard header name is Access-Control-Max-Age

    return req && req.method === 'OPTIONS' ? res.send(200) : next();
};

Example: CORS middleware in lib/middleware/cors.js

Code sample is modeled from: Unit Testing Controllers the Easy Way in Express 4

What can possibly go wrong?

As is the case for routers, the following points may be other challenges when unit testing expressjs middleware:

  • Mock database read/write operations for a middleware that reads/writes from/to a database
  • Mocking read/write from/to third-party services to avoid integration testing trap

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our tiny “Choosing the right tools” framework, the following tools make sense in the context of this blog, when testing expressjs middleware:

  • There exist well-respected test runners in the wild, such as jasmine(jasmine-node), ava, or jest. mocha does just fine for our examples' sake.
  • There are also code instrumentation tools in the wild. mocha integrates well with istanbul, a test coverage and reporting library.

The testing stack of mocha, chai and sinon is worth a shot for most use cases.

Workflow

If you haven't already, read the “How to write test cases developers will love” article.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR, with nyc: $ nyc mocha --color --reporter mocha-lcov-reporter specs

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

Have you ever wondered where to start when refactoring a code block? That is a common source of frustration, and of the bad decision-making that generally follows. When paying off technical debt, small bad moves can build up into a catastrophe, such as unexpected downtime with little to no failure traceability.

This blog post approaches testing of fairly large nodejs application from a real-world perspective and with refactoring in mind.

The mainstream philosophy about automated testing is to write failing tests, followed by code that resolves the failing use cases. In the real world, writing tests can start before, as well as follow, writing code. A particular case is when dealing with untested code.

var sinon = require('sinon'), 
    chai = require('chai'), 
    expect = chai.expect, 
    cors = require('./middleware').cors, 
    req, 
    res, 
    next;
   
describe("cors()", function() {
    beforeEach(function(){
        req = {};
        //res needs a set() mock, since cors() decorates headers via res.set()
        res = { send: sinon.spy(), set: sinon.spy() };
        next = sinon.spy();
    });

    it("should skip preflight requests", function() {
        req = {method: 'OPTIONS'};//preflight requests have method === OPTIONS
        cors(req, res, next);
        expect(res.send.calledOnce).to.equal(true); 
    });     

    it('should decorate requests with CORS permissions', function() {
        cors(req, res, next);
        expect(next.calledOnce).to.equal(true); 
    });
});

Example: unit tests for the cors() middleware

Special Use Case: How to mock a response that will be used with a Streaming Source.

It is worth mentioning that mocking a request object is not rocket science. An empty object, with the right methods used in a given test, is sufficient to assert whether the areas of our interest are covered. The special use case above deserves its own sketch, shown below.
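
A minimal sketch of a response mock that behaves like a stream, assuming node-mocks-http is available:

var httpMock = require('node-mocks-http');
var EventEmitter = require('events').EventEmitter;

//the response emits data/end events like a real writable stream
var res = httpMock.createResponse({ eventEmitter: EventEmitter });
res.on('end', function(){
    //assertions on res._getData() go here
});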

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect. One way this idea may be reflected in real life is by testing the middleware as the isolated, reusable, composable component that it is. Writing a good, meaningful testing message is pure art.

There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

In most integration and end-to-end routes testing, a live server may be deemed critical to make reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore the combination of mocking HTTP requests/responses to make use of an actual server obsolete.

In this article we will talk about:

  • Mocking the Server instance
  • Mocking Route's Request/Response objects
  • Modularization of routes and revealing the server instance
  • Auto-reload (hot reload) using nodemon, supervisor, or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//Handler in users/get-user.js
var User = require('./models').User; 
module.exports = function getUser(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getUser = require('./users/get-user');
router.get('/users/:id', authenticated, getUser);
module.exports = router;

What can possibly go wrong?

When trying to figure out how to approach testing expressjs routes, the driving force behind falling into the integration testing trap is the need to start a server. The following points may be a challenge:

  • Routes should be served at any time while testing
  • Testing in a sandboxed environment restricts what a server can do (opening new ports, serving requests, etc.)
  • Mocking request/response objects to wipe the need for a server out of the picture

Testing routes without spinning up a server

The key is mocking the request/response objects. A typical REST integration test shares similarities with the following snippet.


var app = require('express')(),
  request = require('./support/http');

describe('req .route', function(){
  it('should serve on route /user/:id/edit', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.path).to.equal('/user/:id/edit');
      res.end();
    });

    request(app)
      .get('/user/12/edit')
      .expect(200, done);
  });
  it('should serve get requests', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.method).to.equal('get');
      res.end();
    });

    request(app)
    .get('/user/12/edit')
    .expect(200, done);
  });
});

Example: testing routes with mocked request/response objects

The example above is adapted from StackOverflow and the supertest documentation. supertest spins up a server if necessary. In case we don't want a server at all, dupertest can be a reasonable alternative. request = require('./support/http') is the utility that may use either of those two libraries to provide a request; a sketch follows.
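
A minimal sketch of that ./support/http utility, assuming we settle on supertest under the hood:

//support/http.js ~ a thin wrapper so tests stay library-agnostic
var supertest = require('supertest');

module.exports = function request(app){
  //supertest only binds to an ephemeral port when the request fires
  return supertest(app);
};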

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes by mocking out the server:

  • There exist well-respected test runners in the wild, such as jasmine(jasmine-node), ava, or jest. mocha does just fine for our examples' sake.
  • There are also code instrumentation tools in the wild. mocha integrates well with istanbul, a test coverage and reporting library.
  • supertest, nock, and dupertest are frameworks for mocking HTTP: nock intercepts outgoing requests, while dupertest responds better to our demands (not spinning up a server).

Workflow

If you haven't already, read the “How to write test cases developers will love” article.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR, with nyc: $ nyc mocha --color --reporter mocha-lcov-reporter specs

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Conclusion

To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer.

Testing nodejs routes is quite intimidating on a first encounter. This article contributes to shifting that fear into opportunity.

Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good, meaningful message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.

References

#tdd #testing #nodejs #expressjs #server

This blog post approaches testing a fairly large nodejs application from a real-world perspective and with refactoring in mind. The use cases address the advanced concepts that testing expressjs routes involves.

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike.

In this article we will talk about:

  • Healthy test coverage of routes
  • Modularization of routes for testability
  • Mock Route's Request/Response Objects when necessary
  • Mock requests to third-party endpoints such as Payment Gateway.

Additional challenges while testing expressjs routes

  • Test code, not the output
  • Mock requests to Payment Gateway, etc.
  • Mock database read/write operations
  • Be able to cover exceptions and missing data structures

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//Profile handler in settings/get-profile.js
var User = require('./models').User; 
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getProfile = require('./settings/get-profile');
router.get('/profile/:id', authenticated, getProfile);
module.exports = router;

Example: a profile route behind the authentication middleware

What can possibly go wrong?

When (unit) testing expressjs routes, the following challenges may arise:

  • Drawing a line between tests that fall into the unit testing category versus those that fall into the integration testing camp
  • Being mindful that authenticated routes can come into the picture
  • Mocking database read/write operations, or other layers (controller/service) that are not critical (core) to validating the route's expectations

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes:

  • We can technically have auto-reload or hot-reload using: pm2, nodemon or forever. We recommend supervisor.
  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • supertest framework for mocking Restful APIs and nock for mocking HTTP.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the test

If you haven't already, read the “How to write test cases developers will love” article.

The mainstream philosophy about automated testing is to write failing tests, followed by code that resolves the failing use cases. This is not always the case, especially when dealing with legacy code, or poorly tested code. The less puritan approach is to at least write tests while the code is still fresh in memory.

In this article, we assume the reader knows how to mock routes; otherwise, there are articles in this blog that cover the basics of mocking routes' request/response objects and of mocking database read/write functions.

A common source of frustration, and sometimes of the bad decision-making that follows, is not being able to define boundaries: when to start refactoring, and when to stop.

Testing a route handler in isolation looks like testing any other function. In our case, the User.findById() function that the handler relies on should be mocked.

A minimal sketch of that mocking operation follows; for more on how to mock mongoose read/write functions, see the dedicated article on this blog.
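
//stub User.findById so the handler never touches a real database
var findById = sinon.stub(User, 'findById')
    .yields(null, { _id: 1234, name: 'Jane Doe' });//fake record, hypothetical shape

//once done: findById.restore();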

describe('getProfile', () => {
  let req, res, next, json, error;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { /*...*/ };//mocking session object
    req = { params: {id: 1234}, user: sessionObject };
    res = { status: (code) => ({ json }) };//status() returns an object with a mocked json()
  });

  it('returns a profile', () => {
    getProfile(req, res, next);
    expect(json).toHaveBeenCalled();
  });
  
  it('fails when no profile is found', () => {
    getProfile(req, res, next);
    expect(next).toHaveBeenCalledWith(error);
  });

});

Please refer to this article to learn more about mocking mongoose read/write functions.

Testing an integral route falls into the integration testing category. Whether we connect to a live database or use a live server is up to the programmer, but the best (fast/efficient) approach is to mock out those two expensive parts as well.

var router = require('./profile/router'),
    request = require('./support/http');
describe('/profile/:id', () => {
  it('returns a profile', done => {
    request(router)
      .get('/profile/12')
      .expect(200, done);
  });

  it('fails when no profile is found', done => {
    request(router)
      .get('/profile/NONEXISTENT')
      .expect(500, done);
  });
});

request = require('./support/http') is the utility that may use either supertest or dupertest to provide a request.

Conclusion

When paying off technical debt, small bad moves can build up into a catastrophe, such as downtime with little failure traceability. Good test coverage increases confidence when refactoring and refines boundaries, while at the same time reducing the introduction of new bugs into the codebase.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing routes, just like testing controllers, can be challenging when interacting with external systems is involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #expressjs #routes #discuss

The majority of web applications may not need background jobs, but those that do experience some level of obscurity around testing, debugging, and discovering issues before it is too late. This article contributes towards increasing testability and saving time on late debugging.

As in the blog posts that preceded this one, we will explore some of the ways to make sure most of the parts are accessible for testing.

In this article we will talk about:

  • Aligning background jobs with unit test best practices
  • Mocking session data for services that need authentication
  • Mocking third party systems when testing a background job

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code


//Job definition in jobs/email.js
var email = require('some-lib-to-send-emails'); 
var User = require('./models/user.js');

module.exports = function(agenda) {
  
  agenda.define('registration email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
      if(err) return done(err);
      var message = ['Thanks for registering ', user.name, 'more content'].join('');
      return email(user.email, message, done);
    });
  });

  agenda.define('reset password', function(job, done) {/* ... more code*/});
  // More email related jobs
};

//triggering in lib/controllers/user-controller.js
var express = require('express'),
    app = express(),
    User = require('../models/user-model'),
    agenda = require('../worker.js');

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if(err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent 24 hours later
    agenda.now('registration email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

Example: defining and triggering background jobs with agenda

What can possibly go wrong?

When trying to figure out how to approach testing delayed, asynchronous nodejs background jobs, it is easy to fall into the integration testing trap. Not only are those jobs asynchronous, they are also scheduled to run at a particular time. The following are additional challenges when testing nodejs background jobs in a Unit Test context:

  • Testing asynchronous jobs in a synchronous context ~ time-bound constraints may not be predictable, therefore not covered with our tests (see the fake-timers sketch after this list)
  • Identifying and choosing the right break-point to do the mocking/stubbing
  • Mock third-party services such as Payment Gateway, etc.
  • Mock database read/write operations
  • Sticking to unit testing good practices
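
A minimal sketch of taming time-bound constraints with sinon fake timers; sendReminderEmail is a hypothetical job function:

var sinon = require('sinon');

var clock = sinon.useFakeTimers();
//schedule something 24 hours out, then fast-forward synchronously
setTimeout(sendReminderEmail, 24 * 60 * 60 * 1000);
clock.tick(24 * 60 * 60 * 1000);//fires the timeout right away
clock.restore();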

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing nodejs background, or scheduled, tasks:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting we recommend istanbul.

Workflow

What should I be testing?

If you haven't already, read the “How to write test cases developers will love” article.

Istanbul generates reports as tests progress.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

It is a little bit challenging to test a function that is not accessible outside its definition closure. However, making the function definition accessible from outside the library makes it possible to test the function in isolation. A sketch of such a refactoring follows.
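
//jobs/email.js ~ a hypothetical refactoring for testability
var email = require('some-lib-to-send-emails');
var User = require('./models/user.js');

//exported on its own, so tests can call it directly
function registrationEmailTask(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
        if(err) return done(err);
        return email(user.email, 'Thanks for registering ' + user.name, done);
    });
}

module.exports = function(agenda) {
    agenda.define('registration email', registrationEmailTask);
};
module.exports.registrationEmailTask = registrationEmailTask;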


describe('Jobs', () => {

  it('should define registration email', done => {
    //User.findById and email are assumed to be stubbed with sinon beforehand
    let job = { attrs: { data: { userId: 1234 } } };
    registrationEmailTask(job, () => {
      expect(User.findById).toHaveBeenCalled(); 
      expect(email).toHaveBeenCalled();
      done();
    });
  });

});

Following in the same footsteps, we can test the reset password task. To learn more about mocking database functions, please read this article.

There is a chapter on testing background jobs in the book, for more techniques to mock, modularize and test background jobs.

The lens through which we test the application counts more at this level. A misstep makes us fall into integration testing territory, unwillingly.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science.

We also stressed the fact that, like in any art, practice makes perfect ~ testing background jobs constitutes one of the more challenging tasks, owing to the asynchronous nature of the jobs. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

There is a striking similarity between testing expressjs route handlers and controllers. That similarity, and its exploration through tests, is the subject matter of this article.

Few resources about testing in general address advanced concepts such as how to isolate components for better composability and healthy test coverage. One of the components that improve composability, at least in layered nodejs applications, is the controller.

In this article we will talk about:

  • Mocking controller Request/Response objects
  • Providing healthy test coverage to controllers
  • Avoiding controller integration test trap

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//Session Object in settings/controller/get-profile  
var UserModel = require('../models').User;//assumed model import

module.exports = function getProfile(req, res, next){
    let user = req.session.user;
    UserModel.findById(user._id, (error, user) => {
        if(error) return next(error);
        return res.status(200).json(user); 
    });     
};

This code is both a valid controller and a valid handler. There is a design caveat that makes the case for introducing a service layer in such applications.

What can possibly go wrong?

When trying to figure out how to approach testing expressjs controllers in a Unit Test context, the following points may be a challenge:

  • How to refactor unit tests once a controller layer gets introduced in place of route handlers
  • Mocking database read/write operations, or the service layer if any, that are not core/critical to validating the controller's expectations
  • Test-driven refactoring of the controller to adopt a service layer that abstracts the database and third-party services

The following sections will explore more on making points stated above work.

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own “Choosing the right tools” framework, we adopted the following tools (the ones that made sense to complete the current article) for testing expressjs controllers:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We chose mocha.
  • The stack of mocha, chai and sinon (assertion and test doubles libraries) is worth a shot.
  • supertest framework for mocking RESTful APIs and nock for mocking HTTP.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

It is not always obvious why a nodejs application should have a controller layer. When the controller is already part of the application, it may well be problematic to test it in a way that provides value to the application as a whole, without sacrificing “time to market”.

describe('getProfile', () => {
  let req, res, next, json, error;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { /*...*/ };//mocking session object
    req = { params: {id: 1234}, user: sessionObject };
    res = { status: (code) => ({ json }) };//status() returns an object with a mocked json()
  });

  it('returns a profile', () => {
    getProfile(req, res, next);
    expect(json).toHaveBeenCalled();
  });
  
  it('fails when no profile is found', () => {
    getProfile(req, res, next);
    expect(next).toHaveBeenCalledWith(error);
  });

});

The integration testing of the request may look a bit like the following snippet:

var router = require('./profile/router'),
    request = require('./support/http');
describe('/profile/:id', () => {
  it('returns a profile', done => {
    request(router)
      .get('/profile/12')
      .expect(200, done);
  });

  it('fails when no profile is found', done => {
    request(router)
      .get('/profile/NONEXISTENT')
      .expect(500, done);
  });
});

request = require('./support/http') is the utility that may use either supertest or dupertest to provide a request.

Once the above process is refined, more complex use cases can be sliced into more manageable, testable cases. The following is one of the more complex use cases we can think of for now:

module.exports = function(req, res, next){
  User.findById(req.user, function(error, user){
    if(error) return next(error); 
    new Messenger(options).send().then(function(response){
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job 
      return res.status(200).json({message: 'Some Message'});
    });
  });
};

It may be hard to mock such a use case in one piece, with all its callbacks. That is where slicing, and grouping libraries into reusable services, can come in handy. Once a library has a corresponding wrapper service, it becomes easy to mock the service as we wish.

module.exports = function(req, res, next){
  UserService.findById(req.user)
    .then(function(user){ return new Messenger(options).send(); })
    .then(function(response){ return new RedisService(redisClient).publish(Messenger.SYSTEM_EVENT, payload); })
    .then(function(response){ return res.status(200).json({message: 'Some Message'}); })
    .catch(function(error){ return next(error); });
};
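
A sketch of such a wrapper service, assuming a promise-friendly mongoose model underneath:

//services/user-service.js ~ hypothetical wrapper around the User model
var User = require('../models').User;

module.exports = {
  //one thin, promise-returning function per database operation
  findById: function(id){
    return User.findById(id).exec();//mongoose exec() returns a promise
  }
};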

Alternatively, using an in-memory database can alleviate the need to mock the whole database. The other, more viable way to go is to restructure the application and add a service layer. The service layer makes it possible to test all these features in isolation.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing controllers, just like testing routers, can be challenging, especially when interacting with external systems is involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

Testing the model layer introduces a set of challenges relating to reading from and writing to a database. This article clears up some of those challenges, to avoid side effects and make it possible to test the model layer in isolation.

One of the components that lay the groundwork for data-driven layered applications is the model layer. However, resources about testing, in general, do not address advanced concepts such as how to isolate components for better composability and healthy test coverage.

In this article we will talk about:

  • Basics when testing models
  • Best practices around model layer unit testing.
  • Mocking read/write and third party services to avoid side effects.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//in lib/models/user.js
var mongoose = require('mongoose');
var UserSchema = new mongoose.Schema({name: String});

UserSchema.statics.findByName = function(name, next){
    //statics have access to the compiled model via this
    return this.where({'name': name}).exec(next);
};

UserSchema.methods.addEmail = function(email, next){
    //methods retrieve the compiled model via this.model()
    return this.model('User').find({ type: this.type }, next);
};

//exporting the model 
module.exports = mongoose.model('User', UserSchema);        

//anywhere else the User model is used 
User.findById(id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
});

new User(options).save(function(error, user){
  if(error) return next(error);
  return next(null, user); 
});

Example: mongoose Model definition example in model/user.js

What can possibly go wrong?

When trying to figure out how to approach mocking chained model read/write functions, the following points may be a challenge:

  • Stub database read/write operations ~ finding a balance between what we want to test, versus what we want to mock
  • Mock database read/write operation outputs ~ output may not reflect reality after schema(table definition) change.
  • Cover exceptions and missing data structures ~ databases are complex systems, and we may not cover the majority of scenarios where errors/exceptions may occur
  • Avoid integration testing traps ~ the complexity of database systems makes it hard to stick to the plan and write tests that validate our actual implementation

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own “Choosing the right tools” framework, we adopted the following tools, when testing mongoose models:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc code name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

This blog post approaches testing a fairly large nodejs application from a real-world perspective and with refactoring in mind.

We use sinon stubs to simulate responses from the Mongo::UserSchema::save() function and its equivalents.


describe('User', () => {
    let fakeUser = { name: 'Jane Doe', email: 'jane.doe@jd.com' };
    beforeEach(() => {
        //sinon >= 3 replaces stub(obj, 'fn', fn) with stub(obj, 'fn').yields()/callsFake()
        //NOTE: chained calls such as where().exec() may need their own stubs
        ModelSaveStub = sinon.stub(User.prototype, 'save').yields(null, fakeUser);
        ModelFindStub = sinon.stub(User, 'find').yields(null, [fakeUser]);
        ModelFindByIdStub = sinon.stub(User, 'findById').yields(null, fakeUser);
    });

    afterEach(() => { 
        ModelSaveStub.restore();
        ModelFindStub.restore();
        ModelFindByIdStub.restore();
    });
    
    it('should findByName', (done) => {
        User.findByName('Jane Doe', (error, users) => {
            expect(users[0].name).toBe('Jane Doe');
            done();
        });
    });
    
    it('should addEmail', (done) => {
        new User({}).addEmail('jane.doe@jd.com', (error, users) => {
            expect(users[0].email).toBe('jane.doe@jd.com');
            done();
        });
    });
});

To learn more about mocking database functions, please read this article.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science.

We also stressed the fact that, like in any art, practice makes perfect ~ testing models is challenging, especially when reads/writes to an actual database are involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

The asynchronous computation model makes nodejs flexible enough to perform heavy computations while keeping a relatively low memory footprint. The stream API is one of those computation models; this article explores how to approach testing it.

In this article we will talk about:

  • Difference between Readable/Writable and Duplex streams
  • Testing Writable stream
  • Testing Readable stream
  • Testing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

//Read + Transform + Write stream processing example
var fs = require('fs'),
    zlib = require('zlib'),
    route = require('express').Router(); 
//getter() reads a large file of songs metadata, transforms it, and sends back scaled-down metadata 
route.get('/songs', function getter(req, res, next){
    let rstream = fs.createReadStream('./several-tb-of-songs.json'); 
    rstream
        .pipe(new MetadataStreamTransformer())
        .pipe(zlib.createGzip())//a fresh gzip stream per request
        .pipe(res);
    // forwarding the error to the next handler     
    rstream.on('error', error => next(error, null));
});

//Transformer Stream example
const inherits = require('util').inherits,
    Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    // re-enforces object mode chunks
    this.options = Object.assign({}, options, {objectMode: true});
    Transform.call(this, this.options);
}

inherits(MetadataStreamTransformer, Transform);

MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo process chunk by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || Math.random() });
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals that the operation is over 
    if(typeof next === 'function') {next();}
};

The example above provides a clear picture of the context in which Readable, Writable, and Duplex(Transform) streams can be used.

What can possibly go wrong?

Streams are particularly hard to test because of their asynchronous nature, and that is no exception for I/O on the filesystem or against third-party endpoints. It is easy to fall into the integration testing trap when testing nodejs streams.

Among other things, the following are challenges we may expect when (unit) testing streams:

  • Identify areas where it makes sense to stub
  • Choosing the right mock object output to feed into stubs
  • Mock streams read/transform/write operations

There is an article dedicated to stubbing stream functions, so the current text will not go into detail about the stubbing parts.

Choosing tools

If you haven't already, read the “How to choose the right tools” blog post; it gives insight into the framework we used to choose the tools suggested in this blog.

Following our own “Choosing the right tools” framework, the tools below are not a prescription; rather, they are the ones that made sense to complete this article:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.
  • The stack of mocha, chai, and sinon (assertion and test doubles libraries) is worth a shot.
  • node-mocks-http framework for mocking HTTP Request/Response objects.
  • Code under test is instrumented, but default reporting tools do not always suit every project's needs. The test coverage reporting tool we adopted, also widely adopted by the mocha community, is istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” article.

We approach testing a fairly large nodejs application from a real-world perspective, and with refactoring in mind. A good way to think about large scale is to focus on smaller things and how they integrate (expand) with the rest of the application.

The philosophy of test-driven development is to write failing tests, followed by code that resolves the failing use cases; then refactor, rinse, and repeat. In most real-world settings, writing tests may start at any given moment, depending on multiple variables, one of which is the pressure and timeline of the project at hand.

Writing some tests after the fact (characterization tests) is not a new concept. Another case is dealing with legacy code, or a simply ill-tested code base. That is the case we are dealing with in our code sample.

The first thing is rather to read the code and identify areas of improvement before we start writing tests. The clear improvement opportunity is to eject the function getter() out of the router. Our new construct looks like the following: route.get('/songs', getter);, which allows us to test getter() in isolation, as sketched below.
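
A minimal sketch of that extraction, assuming getter() moves into its own module:

//routes/songs.js ~ hypothetical module layout after ejecting getter()
module.exports = function getter(req, res, next){ /* ... body as above ... */ };

//router.js
route.get('/songs', require('./routes/songs'));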

Our skeleton looks a bit as in the following lines.

describe('getter()', () => {
  let req, res, next, json;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { /*...*/ };//mocking session object
    req = { params: {id: 1234}, user: sessionObject };
    res = { status: (code) => ({ json }) };//status() returns an object with a mocked json()
  });
    //...
});

Let's examine the case where the stream is actually going to fail.

Note that we lack a way to get a handle on the stream object, as the handler does not return any object to tap into. Luckily, the response and request objects are both stream instances, so good mocking can come to our rescue.


//...
let EventEmitter = require('events').EventEmitter,
    httpMock = require('node-mocks-http');

//...
it('fails when no songs are found', done => {
    var self = this; 
    this.next = sinon.spy();
    this.req = httpMock.createRequest({method: 'GET', url: '/songs'});
    this.res = httpMock.createResponse({eventEmitter: EventEmitter});
    
    getter(this.req, this.res, this.next);
    this.res.on('error', function(error){
        assert(self.next.called, 'next() has been called');
        done();
    });
});

Mocking both request and response objects in our context makes more sense. Likewise, to mock the success cases of the response, the reader stream's fs.createReadStream() has to be stubbed so that it ejects a stream of fake content; this time, this.res.on('end') will be used to make assertions. Before going that far, the transformer itself can be tested in isolation, as sketched below.
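
A minimal sketch of testing MetadataStreamTransformer in isolation, assuming it is exported and fed from a handmade readable source:

var Readable = require('stream').Readable;
var assert = require('assert');

it('scales down songs metadata', done => {
    //fake source stream ejecting one JSON chunk, then ending
    let source = new Readable({ read(){ this.push('{"id": 42}'); this.push(null); } });
    let results = [];

    source
        .pipe(new MetadataStreamTransformer())
        .on('data', chunk => results.push(chunk))
        .on('end', () => {
            assert.equal(results[0].id, 42);
            done();
        });
});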

Conclusion

Automated testing of streams is quite intimidating for newbies and veterans alike. There are enough use cases in the book to get you past that mark.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing streams is particularly challenging, especially when reads/writes are involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #tdd #streams #nodejs #mocking

It is quite a challenge to open up the local environment for supervised access from the outside world, the way publicly accessible web servers do. Two scenarios make this statement a bit clearer: wanting to demo to a remote customer, and wanting to allow a remote public server to send a WebHook payload to localhost. This communication, otherwise seen as impossible, is made possible with a technique commonly referred to as tunneling. That is the subject of this blog post.

This blog post will have additional information in the near future.

In this article we will talk about:

  • How to debug a remote server from a local development environment
  • How to share local development work with the world using ssh, vscode, and/or online services
  • How to receive a WebHook payload from a remote server on a local instance, for testing purposes
  • How to expose the nodejs runtime to the browser for debugging purposes
  • How to prevent the local development environment from hitting remote services, using a proxy setup

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Tunneling

Tunneling does port forwarding from a client machine to a server. The client and server here are seen from the perspective of the machine that brokers the data. Tunneling has three faces: local port forwarding, remote port forwarding, and dynamic forwarding.

Local port forwarding

A clear scenario where we need local port forwarding is being on a private network that restricts access to a service: restricted.service.com. A tunnel goes through another service (a jump server) that isn't restricted, safe.com, which forwards our local port 3000 to restricted.service.com:8080. restricted.service.com doesn't need to be a service restricted on a private network; it can also be, for instance, a production database server (postgresql, mongodb, or mysql) provisioned behind a reverse proxy or a firewall that we want to access from our local instance, or a monitoring tool such as uptime or monit that is not accessible to the public. The list can go on.

# Local port forwarding 
# now restricted.service.com is available on http://localhost:3000
$ ssh -nNT -L 3000:restricted.service.com:8080 usr@safe.com 

# to access remote psql service using $ psql -h localhost -p 5431, do
$ ssh -nNT -L 5431:localhost:5432 usr@safe.com  

# to access a remote mongod service using $ mongo -h localhost -p 27016, do            
$ ssh -nNT -L 27016:localhost:27017 usr@safe.com             

How to read the ssh -nNT -L 27016:localhost:27017 usr@safe.com command in plain English: forward our local port 27016 to the remote server's localhost:27017, which is totally fine once ssh'd into safe.com. In this case, localhost:27017 is relative to safe.com's localhost. Our localhost is implicit in the command, as the extended version would look like ssh -nNT -L localhost:27016:localhost:27017 usr@safe.com. With ssh -nNT -L 3000:restricted.service.com:8080 usr@safe.com, however, we are using our safe server as a jump server to connect to a remote service.

Remote port forwarding

Another clear scenario where we need remote port forwarding is making our local server, which is most definitely not available to the public, become somehow available to the public. The use case that may make this concept familiar is wanting to demo something on a local private instance to a customer or a friend. Another scenario would be redirecting traffic from one (public) server to another (local) server, for instance redirecting production read/writes to a local instance.

# on the remote server, edit sshd_config so that: GatewayPorts yes
$ vi /etc/ssh/sshd_config
$ sudo service ssh restart

# now safe.com:8080 exposes our http://localhost:3000 to the public
$ ssh -nNT -R 8080:localhost:3000 usr@safe.com 

Could an ssh tunnel even be used as a blue/green deployment switch?
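
For completeness, the third face, dynamic forwarding, turns ssh into a SOCKS proxy; a one-liner sketch:

# dynamic forwarding: a SOCKS proxy on localhost:1080 routing through safe.com
$ ssh -nNT -D 1080 usr@safe.com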

Writing locally sourced data to a remote server

This technique can be instrumental in two use cases:

The first case is when data has to be copied over to a remote server, for instance, deployments using FTP.

# using scp to download a file from a remote server to the local machine
$ scp username@url|ip:/path/to/file.tar /tmp
# using ssh keys 
$ scp -i ~/.ssh/id_rsa.pub username@url|ip:/path/to/file.tar /tmp

The second case is when a long-living session on a remote server is necessary to perform a series of tasks. For instance, monitoring a server for an extended period of time for troubleshooting reasons.

$ ssh [username]@v4.or.v6.ip 
$ [passwd]
# curl download from github 
$ curl -H "Authorization: token xxxx" -L -o semVer.tar.gz https://github.com/.../semVer.tar.gz
# untar + configure + install 
# it may go on for X hours

The scp command works both ways: to upload files to a remote server, and to download files from a remote server.

In both cases, to establish a remote session, an ssh tunnel is necessary, as presented in the two previous examples.

Reading remote server data to a local instance

This technique forwards a remote port to a local instance. The use cases where it may be useful include, but are not limited to, the following:

tail (or tee) production server logs to a local development environment for log analysis purposes

$ ssh -t server tail -f /var/log/remote.log >> /var/log/local.log
$ ssh -t server "tail -f /var/log/remote.log" | tee -a /var/log/local.log

Debug deployment environment code on a local instance

The techniques explained here can be used in a container environment as well.

Forwarding WebHooks to a local instance

When integrating with asynchronous services such as payments, or receiving notifications about shipping status, WebHooks play a big role in making this communication happen.

The problem is being able to validate that the code handling the WebHook payload works as advertised. One of the ways to test before the sh*t hits the fan is to receive an actual WebHook (or a testing WebHook) on a local development instance. And that is not as easy as it sounds.

This section showcases how to achieve that in a few steps, using both hacks and third-party tools; a sketch based on the remote port forwarding seen earlier follows.
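
A minimal sketch using the remote port forwarding presented earlier, assuming safe.com is a public jump server we control and the provider accepts an arbitrary callback URL:

# expose the local WebHook handler on a public address
$ ssh -nNT -R 8080:localhost:3000 usr@safe.com

# then register http://safe.com:8080/webhooks/payment (hypothetical path)
# as the callback URL; payloads now reach localhost:3000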

Conclusion

In this article, we revisited how to open ports so that remote servers and local servers can communicate in a secure way, using tunneling techniques. We also revisited a couple of applications where tunneling might be useful, be it in the development, demo, or support of a nodejs application in production. There are additional complementary materials in the “Testing nodejs applications” book.

References

#tunneling #nodejs #ssh #vscode #webhooks
