Simple Engineering


Testing authenticated routes sounds intimidating, but the trick to getting it right is simple: the right combination of mocking the session object and stubbing the authentication middleware. This article revisits these two key ingredients to make such tests work.

In this article we will talk about:

  • Avoiding the integration test trap on authenticated routes
  • Stubbing authentication middleware for faster tests
  • Mocking session data on authentication-protected routes

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code


// Authentication middleware in middleware/authenticated.js
var jwt = require('jwt-simple');//or any JWT library exposing decode(token, secret)
var config = require('./config');

module.exports = function(req, res, next){
    let token = req.headers.authorization;
    let payload = jwt.decode(token, config.secret);
    //validate() checks the payload (e.g. expiry); defined elsewhere
    if(!validate(payload)) return next(new Error('session expired'));
    req.user = payload.sub;//adding the user to the request
    return next();
};

//Session Object in settings/controller/get-profile.js
module.exports = function(req, res, next){
    let user = req.session.user;
    UserModel.findById(user._id, (error, user) => {
        if(error) return next(error);
        return res.status(200).json(user); 
    });     
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getProfile = require('./settings/get-profile');
router.get('/profile/:id', authenticated, getProfile);
module.exports = router;

What can possibly go wrong?

There is a clear need to mimic real authentication when testing expressjs authenticated routes, and sometimes this need leads to an integration testing trap.

Following are other challenges we may expect along the way:

  • Avoid testing underlying libraries that provide authentication features
  • Simulate authenticated session data
  • Mock requests behind protected third-party routes, such as Payment Gateways, etc.

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our own “Choosing the right tools” framework, the tools below are not prescriptions, but rather the ones that made sense to complete this article:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.
  • supertest framework for mocking RESTful APIs and nock for intercepting and mocking third-party HTTP requests. supertest is written on top of superagent, so we get both testing toolkits.
  • Code under test is instrumented, but default reporting tools do not always suit our every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul are published under the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” blog post.

The key to mocking a session object lies in this line, found in the example above: let user = req.session.user;. With that knowledge, the session object can be mocked as follows:


describe('getProfile', () => {
  let req, res, next, json;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { user: { /*...*/ } };//mocking the session object
    req = { params: {id: 1234}, session: sessionObject };
    res = { status: sinon.stub().returns({ json }) };//res.status(code).json(payload)
  });

  it('returns a profile', () => {
    //UserModel.findById is assumed to be stubbed to yield a user
    getProfile(req, res, next);
    expect(json.called).to.equal(true);
  });

});

On the other hand, since authenticated() lives in its own module, it can simply be stubbed like any other function when the time comes to test the whole route, for instance with let authenticated = sinon.stub().callsArg(2); so that the request flows through to the handler.
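For example, here is a minimal sketch using proxyquire (a library mentioned later in this blog) to swap the middleware out before the router is loaded; the file paths mirror the snippets above and are assumptions:

var proxyquire = require('proxyquire');
var sinon = require('sinon');

//a stand-in that pretends the request is authenticated and moves on
var authenticated = sinon.spy(function(req, res, next){
  req.user = { _id: 1234 };//hypothetical session payload
  return next();
});

//the router under test picks up the stub instead of the real middleware
var router = proxyquire('./router', {
  './middleware/authenticated': authenticated
});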

Conclusion

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect.

One use case of tapping into middleware re-usability/composability and testability is the authentication middleware presented herein. Writing a good meaningful testing message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

The middleware is one of the components that improve the composability of the expressjs router. This blog post approaches middleware testing from a real-world perspective. The use case is CORS, since it is found in almost all expressjs-enabled applications.

In this article we will talk about:

  • How to mock Request/Response Objects
  • Spying on whether certain calls have been made
  • Making sure requests don't leave the local machine

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

The CORS middleware is one of the most used middleware in the nodejs community.

module.exports = function cors(req, res, next) {
    res.set('Access-Control-Allow-Credentials', true);
    res.set('Access-Control-Allow-Origin', '*');
    res.set('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
    res.set('Access-Control-Allow-Headers', 'X-CSRF-Token, X-CSRF-Strategy, X-Requested-With, Accept, Authorization, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version');
    res.set('Content-Type', 'application/json');
    res.set('Access-Control-Max-Age', '3600');

    return req && req.method === 'OPTIONS' ? res.send(200) : next();
};

Example: CORS middleware in lib/middleware/cors.js

Code sample is modeled from: Unit Testing Controllers the Easy Way in Express 4

What can possibly go wrong?

As is the case for routers, the following points may be other challenges when unit testing expressjs middleware:

  • Mock database read/write operations for a middleware that reads/writes from/to a database
  • Mocking read/write from/to third-party services to avoid integration testing trap

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our tiny “Choosing the right tools” framework, the following tools make sense in the context of this blog when testing expressjs middleware:

  • There exist well-respected test runners such as jasmine(jasmine-node), ava, or jest in the wild. mocha will do just fine for example's sake.
  • There are also code instrumentation tools in the wild. mocha integrates well with the istanbul test coverage and reporting library.

The testing stack mocha, chai and sinon is worth a shot for most use cases.

Workflow

If you haven't already, read the “How to write test cases developers will love” blog post.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR $ nyc mocha --color --reporter mocha-lcov-reporter specs

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

Have you ever wondered where to start when refactoring a code block? That is a common source of frustration and of the bad decision-making that generally follows. When paying off technical debt, small bad moves can build up into a catastrophe, such as having unexpected downtime with little to no failure traceability.

This blog post approaches testing of a fairly large nodejs application from a real-world perspective and with refactoring in mind.

The mainstream philosophy about automated testing is to write failing tests, followed by code that resolves the failing use cases. In the real world, writing tests may well start after the code has been written. A particular case is when dealing with untested code.

var sinon = require('sinon'), 
    chai = require('chai'), 
    expect = chai.expect, 
    cors = require('./middleware').cors, 
    req, 
    res, 
    next;
   
describe("cors()", function() {
    beforeEach(function(){
        req = {}; 
        res = { set: sinon.spy(), send: sinon.spy() };//cors() calls res.set and res.send
        next = sinon.spy();
    });

    it("should skip preflight requests", function() {
        req = {method: 'OPTIONS'};//preflight requests have method === OPTIONS
        cors(req, res, next);
        expect(res.send.calledOnce).to.equal(true); 
    });     

    it('should decorate requests with CORS permissions', function() {
        cors(req, res, next);
        expect(next.calledOnce).to.equal(true); 
    });
});

Example:

Special Use Case: How to mock a response that will be used with a Streaming Source.
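For that special use case, a hedged sketch: the response double can itself be a writable stream, so a handler that pipes a readable source into res still works. The PassThrough stream and the setHeader spy are assumptions about what the handler under test calls.

var { PassThrough } = require('stream');
var sinon = require('sinon');

var res = new PassThrough();
res.setHeader = sinon.spy();//add whatever response methods the handler uses
var chunks = [];
res.on('data', chunk => chunks.push(chunk));
res.on('end', () => {
  var body = Buffer.concat(chunks).toString();
  //assert on body and on res.setHeader calls here
});

//handlerUnderTest(req, res, next); the handler is expected to pipe into res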

It is worth mentioning that mocking a request object is not rocket science. An empty object, with just the methods used in a given test, is sufficient to assert whether the areas of interest are covered.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect. One way this idea plays out in real life is testing the middleware as the isolated, reusable, composable component that it constitutes. Writing a good meaningful testing message is pure art.

There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

This blog post approaches testing a fairly large nodejs application from a real-world perspective and with refactoring in mind. The use cases address advanced concepts such as testing expressjs routes.

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike.

In this article we will talk about:

  • Healthy test coverage of routes
  • Modularization of routes for testability
  • Mock Route's Request/Response Objects when necessary
  • Mock requests to third-party endpoints such as Payment Gateway.

Additional challenges while testing expressjs routes:

  • Test code, not the output
  • Mock requests to Payment Gateway, etc.
  • Mock database read/write operations
  • Be able to cover exceptions and missing data structures

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

//Route handler in settings/get-profile.js
var User = require('./models').User; 
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router that uses the authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getProfile = require('./settings/get-profile');
router.get('/profile/:id', authenticated, getProfile);
module.exports = router;

Example:

What can possibly go wrong?

When unit testing expressjs routes, the following challenges may arise:

  • Drawing a line between tests that fall into the unit testing category versus those that fall into the integration testing camp
  • Being mindful that authenticated routes can come into the picture
  • Mocking database read/write operations, or other layers (controller/service) that are not critical (core) to validating the route's expectations

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes:

  • We can technically have auto-reload or hot-reload using: pm2, nodemon or forever. We recommend supervisor.
  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • supertest framework for mocking Restful APIs and nock for mocking HTTP.
  • Code under test is instrumented, but default reporting tools do not always suit our every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul are published under the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” blog post.

The mainstream philosophy about automated testing is to write failing tests, followed by code that resolves the failing use cases. This is not always the case, especially when dealing with legacy code, or poorly tested code. The less puritan approach is to at least write tests while the code is still fresh in memory.

In this article, we assume the reader knows how to mock routes, otherwise there are articles that cover the basics of mocking routes' request/response objects and how to mock database read/write functions in this blog.

The common source of frustration, and of the bad decision-making that sometimes follows, is not being able to define boundaries: when to start refactoring, and when to stop.

Testing a route handler in isolation looks like testing any other function. In our case, the User.findById() function that the handler relies on should be mocked.

For more on how to mock mongoose read/write functions, refer to the dedicated article on this blog.
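As a minimal sketch, assuming the User model from the handler above, the read function can be stubbed with sinon so the handler runs without a database:

var sinon = require('sinon');
var User = require('./models').User;

beforeEach(() => {
  //the stub yields a canned user to whoever calls User.findById(id, callback)
  sinon.stub(User, 'findById').callsFake((id, callback) => callback(null, { _id: id, name: 'Jane Doe' }));
});

afterEach(() => User.findById.restore());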

describe('getProfile', () => {
  let req, res, next, json;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { /* ... */ };//mocking session object
    req = { params: {id: 1234}, user: sessionObject };
    res = { status: sinon.stub().returns({ json }) };//res.status(code).json(payload)
  });

  it('returns a profile', () => {
    getProfile(req, res, next);
    expect(json.called).to.equal(true);
  });
  
  it('fails when no profile is found', () => {
    //assuming User.findById has been stubbed to yield an error for this case
    getProfile(req, res, next);
    expect(next.calledWith(sinon.match.instanceOf(Error))).to.equal(true);
  });

});

Please refer to this article to learn more about mocking mongoose read/write functions.

Testing an integral route falls into the integration testing category. Whether we connect to a live database or use a live server is up to the programmer, but the best (fast/efficient) approach is to mock out those two expensive parts as well.

var router = require('./profile/router'),
    request = require('./support/http');
describe('/profile/:id', () => {
  it('returns a profile', done => {
    request(router)
      .get('/profile/12')
      .expect(200, done);
  });

  it('fails when no profile is found', done => {
    request(router)
      .get('/profile/NONEXISTENT')
      .expect(500, done);
  });
});

request = require('./support/http') is a utility that may use either supertest or dupertest to provide a request.
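For illustration, here is a hedged sketch of what ./support/http could look like when backed by supertest; it mounts the router under test on a throwaway express app, so no real server needs to listen on a port:

var express = require('express');
var supertest = require('supertest');

//returns a supertest agent bound to an app that only knows the router under test
module.exports = function request(router) {
  var app = express();
  app.use(router);
  return supertest(app);
};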

Conclusion

When paying off technical debt, small bad moves can build up into a catastrophe, such as downtime with little failure traceability. Good test coverage increases confidence when refactoring and refines boundaries, while at the same time reducing the introduction of new bugs into the codebase.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing routes, just like testing controllers, can be challenging when interacting with external systems is involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #expressjs #routes #discuss

The majority of web applications may not need a background job, but those that do often experience some level of shadow around testing, debugging, and discovering issues before it becomes too late. This article contributes towards increasing testability and saving time otherwise spent on late debugging.

As in the blogs that preceded this one, we will explore some of the ways to make sure most of the parts are accessible for testability.

In this article we will talk about:

  • Aligning background jobs with unit test best practices
  • Mocking session data for services that need authentication
  • Mocking third party systems when testing a background job

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code


//Job Definition in jobs/email.js
var email = require('some-lib-to-send-emails'); 
var User = require('./models/user.js');

module.exports = function(agenda) {
  
  agenda.define('registration email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
      if(err) return done(err);
      var message = ['Thanks for registering ', user.name, 'more content'].join('');
      return email(user.email, message, done);
    });
  });

  agenda.define('reset password', function(job, done) {/* ... more code*/});
  // More email related jobs
};
//triggering in route.js
//lib/controllers/user-controller.js
var app = express(),
    User = require('../models/user-model'),
    agenda = require('../worker.js');

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if(err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent 24 hours
    agenda.now('registration email', { userId: user.primary() });
     return res.status(201).json(user);
  });
});

Example:

What can possibly go wrong?

When trying to figure out how to approach testing delayed, asynchronous nodejs background jobs, it is easy to fall into the integration testing trap. Not only are those jobs asynchronous, they are also scheduled to run at a particular time. The following are additional challenges when testing nodejs background jobs in a unit test context:

  • Testing asynchronous jobs in a synchronous context ~ time-bound constraints may not be predictable, therefore not covered with our tests
  • Identifying and choosing the right break-point to do the mocking/stubbing
  • Mock third-party services such as Payment Gateway, etc.
  • Mock database read/write operations
  • Sticking to unit testing good practices

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing nodejs background, or scheduled, tasks:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • Code under test is instrumented, but default reporting tools do not always suit our every project's needs. For test coverage reporting we recommend istanbul.

Workflow

What should I be testing?

If you haven't already, read the “How to write test cases developers will love” blog post.

Istanbul generates reports as tests progress.

# In package.json at "test" - add next line
$ istanbul test mocha -- --color --reporter mocha-lcov-reporter specs
# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

If you haven't already, read the “How to write test cases developers will love” blog post.

It is a little bit challenging to test a function that is not accessible outside of its definition closure. However, making the function definition accessible from outside the library makes it possible to test the function in isolation, as sketched below.
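A minimal sketch of that idea, based on the agenda job definition above; User and email are the requires already at the top of jobs/email.js, and the registrationEmailTask name is ours, mirroring the test that follows:

//in jobs/email.js ~ the handler gets a name and its own export
function registrationEmailTask(job, done) {
  User.findById(job.attrs.data.userId, function(err, user) {
    if(err) return done(err);
    var message = ['Thanks for registering ', user.name, 'more content'].join('');
    return email(user.email, message, done);
  });
}

module.exports = function(agenda) {
  agenda.define('registration email', registrationEmailTask);
};
module.exports.registrationEmailTask = registrationEmailTask;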


describe('Jobs', () => {

  it('should define registration email', done => {
    //User.findById and email are assumed to be sinon stubs that yield fake data
    let job = { attrs: { data: { userId: 1234 } } };
    registrationEmailTask(job, error => {
      expect(User.findById.called).to.equal(true); 
      expect(email.called).to.equal(true);
      done();
    });
  });

});

Following the same footsteps, we can test the reset password task. To learn more about mocking database functions, please read this article.

There is a chapter on testing background jobs in the book, for more techniques to mock, modularize and test background jobs.

The lens through which we test the application matters most at this level. A misstep makes us fall into integration testing territory, unwillingly.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science.

We also stressed the fact that, like in any art, practice makes perfect ~ testing background jobs is among the more challenging tasks because of the asynchronous nature of the jobs. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

There is a striking similarity between testing expressjs route handlers and controllers. That similarity and test exploration is the subject matter of this article.

Few resources about testing in general address advanced concepts such as how to isolate components for better composability and healthy test coverage. One of the components that improve composability, at least in layered nodejs applications, is the controller.

In this article we will talk about:

  • Mocking controller Request/Response objects
  • Providing healthy test coverage to controllers
  • Avoiding controller integration test trap

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

//Session Object in settings/controller/get-profile.js
module.exports = function getProfile(req, res, next){
    let user = req.session.user;
    UserModel.findById(user._id, (error, user) => {
        if(error) return next(error);
        return res.status(200).json(user); 
    });     
};

This code is both a valid controller and a valid route handler. There is a caveat in this design that makes the case for introducing a service layer in the application.

What can possibly go wrong?

When trying to figure out how to approach testing expressjs controllers in a Unit Test context, the following points may be a challenge:

  • How to refactor unit tests when the controller layer gets introduced in place of route handlers
  • Mocking database read/write operations, or the service layer if any, that are not core/critical to validating the controller's expectations
  • Test-driven refactoring of the controller to adopt a service layer, abstracting away the database and third-party services.

The following sections will explore more on making points stated above work.

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our own “Choosing the right tools” framework, we adopted the following tools (that made sense to complete current article) on testing expressjs controllers:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We chose mocha.
  • The stack mocha, chai and sinon (assertion and test doubles libraries) is worth a shot.
  • supertest framework for mocking Restful APIs and nock for mocking HTTP.
  • Code under test is instrumented, but default reporting tools do not always suit our every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul are published under the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love” blog post.

It is not always obvious why to have a controller layer in a nodejs application. When the controller is already part of the application, it may well be problematic to test it, in a way that provides value to the application as a whole, without sacrificing “time to market”.

describe('getProfile', () => {
  let req, res, next, json;
  beforeEach(() => {
    next = sinon.spy();
    json = sinon.spy();
    let sessionObject = { user: { /*...*/ } };//mocking the session object
    req = { params: {id: 1234}, session: sessionObject };
    res = { status: sinon.stub().returns({ json }) };//res.status(code).json(payload)
  });

  it('returns a profile', () => {
    //UserModel.findById is assumed to be stubbed to yield a user
    getProfile(req, res, next);
    expect(json.called).to.equal(true);
  });
  
  it('fails when no profile is found', () => {
    //assuming UserModel.findById has been stubbed to yield an error for this case
    getProfile(req, res, next);
    expect(next.calledWith(sinon.match.instanceOf(Error))).to.equal(true);
  });

});

The integration testing of the request may look a bit like the following:

var router = require('./profile/router'),
    request = require('./support/http');
describe('/profile/:id', () => {
  it('returns a profile', done => {
    request(router)
      .get('/profile/12')
      .expect(200, done);
  });

  it('fails when no profile is found', done => {
    request(router)
      .get('/profile/NONEXISTENT')
      .expect(500, done);
  });
});

request = require('./support/http') is a utility that may use either supertest or dupertest to provide a request.

Once the above process is refined, more complex use cases can be sliced into more manageable but testable pieces. The following is one of the more complex use cases we can think of for now:

module.exports = function(req, res, next){
  User.findById(req.user, function(error, user){
    if(error) return next(error); 
    new Messenger(options).send().then(function(response){
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job 
      return res.status(200).json({message: 'Some Message'});
    });
  });
};

It may be hard to mock one single use case, with callbacks. That is where slicing, and grouping libraries into reusable services can come in handy. Once a library has a corresponding wrapper service, it becomes easy to mock the service as we wish.

module.exports = function(req, res, next){
  UserService.findById(req.user)
    .then(function(user){ return new Messenger(options).send(); })
    .then(function(response){ return new RedisService(redisClient).publish(Messenger.SYSTEM_EVENT, payload); })
    .then(function(response){ return res.status(200).json({message: 'Some Message'}); })
    .catch(function(error){ return next(error); });
};

Alternatively, using an in-memory database can alleviate the task of mocking the whole database. The other, more viable way to go is to restructure the application and add a service layer. The service layer makes it possible to test all these features in isolation.
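To illustrate why, a minimal sketch: each service from the snippet above can be stubbed independently with sinon inside a mocha hook (UserService, RedisService and Messenger are the names assumed in that snippet):

var sinon = require('sinon');

beforeEach(() => {
  sinon.stub(UserService, 'findById').resolves({ _id: 1234 });
  sinon.stub(Messenger.prototype, 'send').resolves({ status: 'sent' });
  sinon.stub(RedisService.prototype, 'publish').resolves('OK');
});

afterEach(() => sinon.restore());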

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing controllers, just like testing routers, can be challenging especially when interacting with external systems is involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

Testing the model layer introduces a set of challenges relating to reading and writing to a database. This article clears some of the challenges to avoid side effects and makes it possible to test the model layer in isolation.

One of the components that lay the groundwork for data-driven layered applications is the model layer. However, resources about testing, in general, do not address advanced concepts such as how to isolate components for better composability and healthy test coverage.

In this article we will talk about:

  • Basics when testing models
  • Best practices around model layer unit testing.
  • Mocking read/write and third party services to avoid side effects.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

//in lib/models/user.js
var mongoose = require('mongoose');
var UserSchema = new mongoose.Schema({name: String});

UserSchema.statics.findByName = function(name, next){
    //statics have access to the compiled model
    return this.where({'name': name}).exec(next);
};

UserSchema.methods.addEmail = function(email, next){
    //methods run on document instances; this.model('User') retrieves the compiled model
    return this.model('User').find({ type: this.type }, next);
};

//exporting the compiled model 
module.exports = mongoose.model('User', UserSchema);        

//anywhere else the User model is used 
User.findById(id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
});

new User(options).save(function(error, user){
  if(error) return next(error);
  return next(null, user); 
});

Example: mongoose Model definition example in model/user.js

What can possibly go wrong?

When trying to figure out how to approach mocking chained model read/write functions, the following points may be a challenge:

  • Stub database read/write operations ~ finding a balance between what we want to test, versus what we want to mock
  • Mock database read/write operation outputs ~ output may not reflect reality after schema(table definition) change.
  • Cover exceptions and missing data structures ~ databases are complex systems, and we may not cover the majority of scenarios where errors/exceptions may occur
  • Avoid integration testing traps ~ the complexity of database systems makes it hard to stick to the plan and write tests that validate our actual implementation

Choosing tools

If you haven't already, reading “How to choose the right tools” blog post gives insights on a framework we used to choose the tools we suggest in this blog.

Following our own “Choosing the right tools” framework, we adopted the following tools, when testing mongoose models:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. We recommend mocha. The stack mocha, chai and sinon can be worth it as well.
  • Code under test is instrumented, but default reporting tools do not always suit our every project's needs. For test coverage reporting we recommend istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul are published under the nyc code name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Show me the tests

If you haven't already, read the “How to write test cases developers will love” blog post.

This blog post approaches testing of a fairly large nodejs application from a real-world perspective and with refactoring in mind.

We use sinon stubs to simulate responses from the mongoose UserSchema save() function and its equivalents.


describe('User', () => {
    beforeEach(() => {
        //callsFake replaces the deprecated three-argument stub API
        ModelSaveStub = sinon.stub(User.prototype, 'save').callsFake(cb => cb(null, new User({name: 'Jane Doe'})));
        ModelFindStub = sinon.stub(User, 'find').callsFake((query, cb) => cb(null, [{name: 'Jane Doe'}]));
        ModelFindByIdStub = sinon.stub(User, 'findById').callsFake((id, cb) => cb(null, {name: 'Jane Doe'}));
    });

    afterEach(() => { 
        ModelSaveStub.restore();
        ModelFindStub.restore();
        ModelFindByIdStub.restore();
    });
    
    it('should findByName', (done) => {
        //findByName delegates to this.where().exec(); without a live connection,
        //mongoose.Query.prototype.exec can be stubbed the same way
        User.findByName('Jane Doe', (error, users) => {
            expect(users[0].name).to.equal('Jane Doe');
            done();
        });
    });
    
    it('should addEmail', (done) => {
        //addEmail is an instance method, so it is called on a document
        new User({name: 'Jane Doe'}).addEmail('jane.doe@jd.com', (error, docs) => {
            expect(docs[0].name).to.equal('Jane Doe');
            done();
        });
    });
});

To learn more about mocking database functions, please read this article.

Conclusion

Automated testing of any JavaScript project is quite intimidating for newbies and veterans alike. In this article, we reviewed how testing tends to be more of an art than a science.

We also stressed the fact that, like in any art, practice makes perfect ~ testing models is challenging, especially when reads/writes to an actual database are involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

Some nodejs projects rely on expressjs for routing. Past a certain threshold, some request handlers start looking like copycats. Extreme cases of such duplication become a nightmare to debug and hinder scalability. Increasing code reusability and modularity improves overall testability, and along the way scalability and user experience. The question we have to ask is: How do we get there?

This blog article explores some of the ways to achieve that. In the expressjs routes context, we shift focus to making sure most of the parts are accessible and testable.

In this article we will talk about:

  • The need to modularize expressjs routes
  • How to modularize expressjs routes for reusability
  • How to modularize expressjs routes for testability
  • The need for a manifest route modularization strategy
  • How to modularize expressjs routes for composability
  • How to modularize expressjs route handlers for reusability
  • How to modularize expressjs route handlers for performance
  • How to modularize expressjs route handlers for composability
  • The need to have route handlers as controllers
  • How to specialize routes handlers as controllers

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

While following the simple principle of “make it work”, you realize that route lines of code (LoC) grow linearly (or lean towards exponential) as feature requests increase. All this growth can happen inside one file, or in a single function. Assuming all our models are NOT in the same files as our routes, the following source code may be what we have in the early days of a project:

var User = require('./models').User; 
/** code that initializes everything then comes this route*/
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

/**
 * More code, more time, more developers 
 * Then you realize that you actually need:
 */ 
app.get('/admin/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

Example:

What can possibly go wrong?

When trying to figure out how to approach modularization of expressjs routes, the following points highlight some challenges:

  • Understanding where to start, and where to stop when modularizing routes
  • Making a choice between a layered architecture with or without controllers
  • Making a choice between a layered architecture with or without services

In the next sections, we will explore more on the points raised earlier.

The need to modularize expressjs routes

One heavily relied-upon feature in expressjs is its router. Routes tend to grow out of proportion and can be a source of trouble when the time comes to test, refactor or extend existing functionality. One of the tools that makes our job easier is applying modularization techniques to expressjs routes.

How to modularize expressjs routes for reusability

A given route is registered only once per application, so the notion of route re-usability may not be as evident as it should be in such a context.

However, when we look closer at the construction of a handler, we get a sense of how an actual route's work can be spread across multiple instances and use cases. When we look at the path itself, it is possible to find matching suffixes.

Suffixes indicate that multiple routes may indeed be using one handler. To keep it simple: different contexts, same actions. /admin/add/user, /profile/add/user, /school/:id/add/user etc. All of the roots, or prefixes, of /add/user are contexts in which some action is taking place.

Deep down, the end result is a user being added. There is a good probability that the user is going to be added to the same database table or document.

//in one file 
let router = require('express').Router();
router.post('/add/user', addUser);
module.exports = router;

//later in another file 
let router = require('express').Router(), 
    add = require('/one/file');

router.use('/admin', add);
router.use('/profile', add);
router.use('/school/:id', add);
module.exports = router;

The modularization of routes, for that matter — route handlers, should not stop at their ability to be reusable.

Modularization can guarantee the stability of routes and their handlers. To put things in perspective, for two distinct routes that share the same route handler, a change in parameter naming should not affect other routes. Likewise, a change in route handler affects routes using the same handler but does not necessarily affect any route configuration.

Like in other use cases, modularizing an expressjs route consists of two major changes. The first step is to identify, name and eject route handlers. This step may be a bit challenging when middleware is involved. The second and last step is to move and group similar handlers under the same library. The said library can be exposed to the public using the index trick we discussed in other blog posts, as sketched below.
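A minimal sketch of that index trick, with hypothetical handler names and paths:

//in handlers/index.js ~ groups and re-exports related handlers
module.exports = {
  addUser: require('./add-user'),
  getProfile: require('./get-profile')
};

//in a route file, handlers are required by name from one place
var handlers = require('./handlers');
var router = require('express').Router();
router.post('/add/user', handlers.addUser);
router.get('/profile/:id', handlers.getProfile);
module.exports = router;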

How to modularize expressjs routes for testability

The challenge when mocking an expressjs route is losing route handler implementation in the process.

That may not be an issue when executing integration or end-to-end testing tasks. Taking into consideration that individual handlers can be tested in isolation, we get the benefits of reducing the number of tests and mocking work required per route.

The second challenge is to find a sweet spot between integration testing, unit testing and apply both ideas to the route and route handler, per test case needs.

Loading any library in unit tests is expensive, let alone loading the entire expressjs in every unit test. To avoid this, either loading express from a mockable library or injecting the expressjs application as needed may be two healthy alternatives to look into, as sketched below.
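A hedged sketch of the injection alternative: the route module receives an app (or router) instead of creating one, so a unit test can pass a lightweight fake. File names are illustrative; the handler and middleware names come from the earlier snippets.

//in routes/profile.js ~ the module mounts itself on whatever app it is given
var authenticated = require('./middleware/authenticated');
var getProfile = require('./settings/get-profile');

module.exports = function mountProfileRoutes(app) {
  app.get('/profile/:id', authenticated, getProfile);
  return app;
};

//in a unit test, a bare object with a spied get() is enough
var fakeApp = { get: sinon.spy() };
mountProfileRoutes(fakeApp);
//assert fakeApp.get was called with '/profile/:id'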

The need for a manifest route modularization strategy

There is a common pattern that reveals itself at the end of the modularization effort. Related routes can be grouped into independent modules, to be reused independently on demand. To make this thought a reality, the manifest route technique attaches routes to a router and makes that router available and ready to be used by other routers, or by an expressjs application, as sketched below.
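A minimal sketch of the manifest idea, with assumed file names:

//in routes/index.js ~ the manifest groups feature routers into one mountable router
var router = require('express').Router();
router.use('/users', require('./users'));
router.use('/profile', require('./profile'));
module.exports = router;

//in app.js, or in another router higher up
app.use('/api', require('./routes'));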

How to modularize expressjs routes for composability

There is a lot to unpack when dealing with the composability of expressjs routes. The takeaway is that a route should be defined in a way that it can be plugged into any router and just work. Another example would be the ability to mount a server or an expressjs app instance onto the route definition on the go and have an application that just works.

How to modularize expressjs route handlers for reusability

The reusability aspect of business logic comes in handy to help reduce instances of code duplication. One can argue that this also helps with performance, as well as with better test coverage. The advanced use case of higher re-usability ends in a controller or a well-organized module of handlers.

How to modularize expressjs route handlers for performance

The nodejs module loader is expensive. To be fair, reading a file is expensive. node_modules is notorious for the number of directories and files associated with it. It is no surprise that reading and loading all those files may be a performance bottleneck. The fewer files we read from the disk, the better. The following modularization for composability is a living example of how modularization can go hand in hand with performance improvements.

How to modularize expressjs route handlers for composability

Both in this blog post and in the ones that came before it, we strive to make the application more reusable while at the same time reducing the time it takes to load the application for use or for testing purposes. One way of reducing the number of imports is to leverage thunks or injections, as in the sketch below.
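One reading of that idea, as a hedged sketch: a handler factory takes its heavy dependencies as parameters, so nothing extra is required at load time and tests inject doubles. Names are illustrative.

//in handlers/get-profile.js ~ the factory defers the dependency to the caller
module.exports = function makeGetProfile(UserModel) {
  return function getProfile(req, res, next) {
    UserModel.findById(req.params.id, function(error, user) {
      if(error) return next(error);
      return res.status(200).json(user);
    });
  };
};

//in a test, a stubbed model is injected; no database, no expressjs required
var getProfile = makeGetProfile({ findById: (id, cb) => cb(null, { _id: id }) });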

The need to have route handlers as controllers

If we look up close, route handlers are tightly coupled to a route. Previous techniques broke that coupling and moved individual route handlers into their own modules. Another up-close look reveals two key points: first, some route handlers are copycats; second, some route handlers are related to the point that they may constitute an independent entity on their own. If we group all handlers related to providing one feature, we land squarely in controller territory.

How to specialize routes handlers as controllers

If there exist multiple ways to brew a beer, there should be multiple ways of clustering related handlers in the same module or component! OK, let's admit that that example does not follow any sound logic, but you see the point.

One of the ways to group related handlers is to start grouping by feature. Then, if for some reason multiple features happen to use similar (or copycat) handlers, choosing a higher level of abstraction becomes ideal. When we have an equivalent of a base controller, that base controller can move to a common library. The name of the common library can be, for instance: /core, /common, or even /lib. We can get creative here. A sketch follows.
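A hedged sketch of such a base controller living in a common library; the layout and names are assumptions:

//in lib/base-controller.js ~ generic handlers parameterized by a model
module.exports = function baseController(Model) {
  return {
    getById: function(req, res, next) {
      Model.findById(req.params.id, function(error, doc) {
        if(error) return next(error);
        return res.status(200).json(doc);
      });
    }
  };
};

//in controller/user.js, a feature controller specializes the base one
var User = require('../models').User;
module.exports = require('../lib/base-controller')(User);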

Modularization of Express routes

The easy way to mitigate handler duplication is to group similar functions into the same file. Since the service layer is sometimes not relevant, we can group these functions into a controller.

//in controller/user.js
module.exports = function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error || !user){
      return next(error);//return right away
    }
    return res.status(200).json(user);
  });
};

//in routes/user.js
var getUser = require('./controller/user');
var router = require('express').Router();
router.get('/users/:id', getUser);
router.get('/admin/:id', getUser);
//exporting the router
module.exports = router;

Example:

Both controller/user.js and two routes can be tested in isolation.

Conclusion

The complexity that comes with working on large-scale nodejs/expressjs applications reduces significantly when the application is in fact well modularized. Modularization is a key strategy in making expressjs routes more re-usable, composable, and stable as the rest of the system evolves. Modularization brings not only elegance to the routes but also reduces the possibility of route redundancy, as well as improved testability.

In this article, we revisited techniques that improve expressjs routes' elegance, their testability, and re-usability. We focused more on layering the route into routes and controllers, as well as applying modularization techniques based on module.exports and index files. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

Mocking and stubbing walk hand in hand. Stubbing the pub/sub clients of redis, a datastore widely adopted in the nodejs ecosystem, can be a setback when testing WebSocket endpoints. This article brings clarity, and a path forward, to it.

In this article we will talk about:

  • Stubbing redis clients
  • Replacing redis with a drop-in replacement
  • How to avoid spinning up a redis server

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

module.exports = function(req, res, next){
  User.findById(req.user, (error, user) => {
    if(error) return next(error); 
    new Messenger(options).send().then((response) => {
      redisClient.publish(Messenger.SYSTEM_EVENT, payload);
      //schedule a delayed job 
      return res.status(200).json({message: 'Some Message'});
    });
  });
};

//service based equivalent using a service layer
module.exports = function(req, res, next){
  UserService.findById(req.user)
    .then(user => new Messenger(options).send())
    .then(response => new RedisService(redisClient).publish(Messenger.SYSTEM_EVENT, payload))
    .then(response => res.status(200).json({message: 'Some Message'}))
    .catch(error => next(error));
};

The use of arrow functions instead of the function keyword serves to shorten the code. It is possible to replace all arrow functions with the function keyword, for readability.

What can possibly go wrong?

The following points may be a challenge to mock datastore access:

  • Same level of challenge as when mocking database access functions
  • Asynchronous nature of pub/sub clients, characteristic to queue processing systems
  • When the application is using redis (local or remote)
  • Running tests without spinning up a redis server

The following sections will explore more on making points stated above work.

Show me the tests

There is more than one way to go about mocking. We preview a few libraries and choose the one that fits our needs best.

Some of the libraries we can tap into to make mocking possible are: rewire, fakeredis, proxyquire and sinon.

Mocking redis using rewire

var rewire = require('rewire');
var sinon = require('sinon');

//module to mock redisClient from 
var controller = rewire('/path/to/controller.js');

//the mock object + stubs
var redisMock = {
  //get|pub|sub are stubs that can return a promise or do other things
  get: sinon.spy(function(options){ return 'someValue'; }),
  pub: sinon.spy(function(options){ return 'someValue'; }),
  sub: sinon.spy(function(options){ return 'someValue'; })
};

//replacing the `redis` client methods :::: this does not prevent spinning up a new `redis` server
controller.__set__('redisClient', redisMock);

Example: mocking redis using rewire

Mocking redis using fakeredis. fakeredis provides a drop-in replacement for redis's createClient() function.

var redis = require("redis");    
var fakeredis = require('fakeredis'); 
var sinon = require('sinon'); 
var assert = require('chai').assert; 

var users, client; 
describe('redis', function(){
  before(function(){
    sinon.stub(redis, 'createClient').callsFake(fakeredis.createClient);
    client = redis.createClient(); //or anywhere in code it can be initialized
  });

  after(function(done){
    client.flushdb(function(error){
      redis.createClient.restore();
      done();
    });
  });
});

Example: mocking redis using fakeredis

Two of the alternatives whose examples do not figure in this article are mocking redis using redis-mock and proxyquire.

The goal of the redis-mock project is to create a feature-complete mock of node_redis (https://github.com/mranney/node_redis), so that it may be used interchangeably when writing unit tests for code that depends on redis.

Conclusion

In this article, we revisited strategies to mock redis access methods and replace response objects with mock data.

Testing in parallel can stress the redis server. Mocking redis clients makes tests faster, reduces friction on the network, and prevents stressing the redis server, especially when it is shared with other production applications.

There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss


Configuration is at the forefront of any application. This article discusses a couple of strategies to go about nodejs configurations, and some tools that can be leveraged to that end.

Techniques explained in this blog are also available, with more details, in the “Configurations” chapter of the “Testing nodejs Applications” book. You can grab a copy of the book on this link.

In this article you will learn about:

  • Differentiation of configuration layers
  • Overview of tools that help to manage configuration files
  • Overview of basic configurations for a production ready nodejs application
  • Better manage configuration files, storing and provisioning production secret keys
  • Monitoring, failover, server and SSL certificate tools
  • Reducing configuration code change when new versions of system applications are released

Layers of configuration of nodejs applications

Although this blog article provides an overview of tools and configurations, it leaves modularization of configurations in a nodejs setting to another blog post: “Modularize nodejs configurations”.

From a production readiness perspective, there are two distinctive layers of application configuration, at least in the context of this blog post.

The first layer consists of configurations required by the system that is going to be hosting the nodejs application. Database server settings, monitoring tools, SSH keys, and other third party programs running on the hosting entity are a few examples that fall under this category. We will refer to these as system variables/settings.

The second layer consists of configurations the nodejs application needs to execute intrinsic business logic. They will be referred to as environment variables/settings. Third party issued secret keys or server port numbers fall under this category. In most cases, you will find such configurations as static variables in the application.
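As a quick illustration of this second layer, a hedged sketch of a config module backed by environment variables; the variable names are placeholders:

//in config/index.js ~ environment settings with development defaults
module.exports = {
  port: process.env.PORT || 8080,
  mongoUri: process.env.MONGO_URI || 'mongodb://localhost/app-dev',
  stripeKey: process.env.STRIPE_SECRET_KEY //third party issued secret
};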

This blog will be about working with the first layer: system settings.

For disambiguation, the system is a computing entity composed of software(an operating system, etc) and hardware(virtual or physical).

Managing system configuration variables

Since the environment variable layer of the configuration is technically embedded, by default, within the code that uses it, changes in configuration stay in sync with the code, and vice-versa.

Unlike the environment variables, system variables are not managed the same way as the nodejs applications they run. Just because our application's new version saw some changes in environment settings does not mean that the nginx server has its own settings changed as well. From another perspective, just because the latest nginx version saw some changes in its settings does not necessarily mean that our nodejs application's environment settings have to change as well.

The problem we constantly face is figuring out how to manage changes in configuration as the code evolves, and as the underlying system software evolves.

Things become a bit more complicated to manage when a third party software (database, monitoring tools) code change also involves configuration changes. We have to be informed about the changes at hand, which is not always evident, as those changes are released at the will of the vendors and not necessarily communicated to us in realtime. Next, we have to figure out where every single configuration is located on our system, then apply the new modifications. Additional complexity comes in when new changes become incompatible with our current version of the nodejs application code, or when rollbacks are unavoidable.

The nodejs application code is not always in sync with the system that hosts it. This is where configuration management (aka CM) tools shine. Passing around both system and environment configuration values is a risky business, security-wise. This is where configuration provisioning tools come in handy.

Provisioning secrets at deployment time

In teams that have CI/CD implemented, every programmer has the ability to deploy the latest code version to production. With great power comes great responsibility. Making sensitive data accessible to a larger audience comes with an increased security risk, for instance leaking secret keys to the public.

The challenge lies in how to approach configuration data management as a part of the software: giving developers the ability to work with the code, while limiting access to production configuration secrets.

The key is to provision production secrets at deployment time, as a part of the delivery step, and let every developer have their own development secrets. This way, one compromised developer account cannot lead to an organization-wide data breach.

Examples of tools that make provisioning secrets possible: SecretHub, Kubernetes, HashiCorp Vault, etc.
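As a hedged sketch of what deployment-time provisioning can look like (assuming HashiCorp Vault; the secret path and field name are hypothetical), a delivery step could export the production secret into the process environment instead of committing it to the repository:

# Delivery step: pull the production secret from Vault and expose it to the
# nodejs process as an environment variable. The path secret/appname and the
# field stripe_secret_key are hypothetical.
export STRIPE_SECRET_KEY="$(vault kv get -field=stripe_secret_key secret/appname)"

# Locally, each developer exports their own development secret instead.
export STRIPE_SECRET_KEY="sk_test_..."

Example: provisioning a production secret at deployment time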

Reducing configuration changes when new system applications rollout

The Twelve-Factor App methodology suggests managing configuration as code. That makes it fast to deploy the application anywhere, with less code change when new releases come in.

In most applications that are not containerized, configurations can be stored on the file system, for example at /etc/config/[app-name]/config.ext. This works at a smaller scale. It eventually becomes a problem when setting up new developer and production machines, but having such a convention in place reduces the pain.

When managing multiple instances of the same application, it is better to move this configuration inside the code, at least at build time, ideally at the root: [app-root]/config/config.ext. At deployment time, there will be an additional symlinking step to make sure the new deployment points to the right configuration files, as in the sketch below.
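A hedged sketch of that symlinking step, with hypothetical paths for the application and its provisioned configuration:

# Run as part of the deployment script, after the new release is in place.
# -s creates a symlink, -f replaces an existing link, -n avoids descending
# into an existing symlinked directory; the paths are hypothetical.
ln -sfn /etc/config/appname/config.json /var/www/appname/config/config.json

Example: pointing a new deployment to provisioned configuration files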

Configure nginx to serve nodejs application

nginx is a very good alternative to the Apache server. Its non-blocking, event-driven model makes it a perfect match to proxy nodejs applications. It is also possible to configure it as a load balancer.

The location of nginx configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and nginx is installed and configured at /etc/nginx.

Some other possible places are /usr/local/nginx, /usr/local/etc/nginx or any other location depending on how the operating system manages its filesystem. The paths above are of course on Linux or Unix distributions.

We recommend reading “How to install nodejs” and “How to install nginx” for more in-depth information that may not be found in the current blog post.

The magic happens in the upstream nodeapps section. This configuration plays the gateway role, and makes public a server that would otherwise be private.


upstream nodeapps {
  # Directs requests to the process with the least number of connections.
  least_conn;
  # One entry per nodejs process; adjust ports to match your deployment.
  server 127.0.0.1:8080 max_fails=0 fail_timeout=10s;
  server 127.0.0.1:8081 max_fails=0 fail_timeout=10s;
  keepalive 512;
}

server {
  listen 80;
  server_name app.website.tld;
  client_max_body_size 16M;
  keepalive_timeout 10;

  # Make site accessible from http://localhost/
  root /var/www/[app-name]/app;
  location / {
    proxy_pass http://nodeapps;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

Example: Typical nginx configuration at /etc/nginx/sites-available/app-name

Configure redis to run with a nodejs server

redis is a minimalistic yet feature-complete in-memory key-value data store. The need to have redis in addition to a database arises from the need to make realtime features possible in a clustered/multi-process nodejs deployment. It can run as a standalone instance or in a clustered environment.

The location of redis configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and redis is installed, with its configuration at /etc/redis.conf.

Some other possible places are /usr/local/redis, /usr/local/etc/redis or any other location depending on how the operating system manages its filesystem.

There is little to no configuration required to run a redis instance, and the same configuration data can be passed as arguments at start time.

port 6380
maxmemory 2mb

Example: a minimal redis configuration, for instance in /etc/redis.conf

To launch redis via the CLI, either of the following commands can be used:

  • $ redis-server --port 6380 --slaveof 127.0.0.1 6379: starts redis on localhost, on port 6380, as a replica (slave) of another instance running on port 6379.
  • $ redis-server /usr/local/etc/redis.conf: starts redis using the configuration settings stated in /usr/local/etc/redis.conf.

We recommend reading “How to install redis” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as the redis configuration manual.
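To tie this back to the nodejs application, the following is a minimal sketch of a client connecting to the instance configured above, assuming the redis npm package is a project dependency:

// Hedged sketch: connect to the redis instance started on port 6380 above.
const redis = require('redis');
const client = redis.createClient({ host: '127.0.0.1', port: 6380 });

client.on('error', (error) => console.error('redis connection failed', error));
client.set('health-check', 'ok');

Example: connecting a nodejs application to the redis instance above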

Configure mongodb as a database server for nodejs project

mongodb is a noSQL database engine that covers most use cases a nodejs application may have. It is possible to configure the database in a cluster, as well as in standalone mode.

The location of mongodb configuration files depends on the operating system distribution the database server is hosted on. In our context, we assume that our operating system is Linux/Unix and mongodb is installed, with its configuration at /etc/mongod.conf.

Some other possible places are /usr/local/mongodb, /usr/local/etc/mongodb or any other location depending on how the operating system manages its filesystem. As always, init scripts can be found at //

There are not a lot of configurations to change to run a mongodb server. It is possible to start using the service right after installation, with one exception: when running multiple mongodb instances on the same server, or when replication and sharding features are needed.

processManagement:
   fork: true
net:
-  bindIp: localhost
+  bindIp: localhost,10.8.0.10,192.168.4.24,/tmp/mongod.sock
   port: 27017
storage:
-  dbPath: /srv/mongodb
+  dbPath: /custom/path/to/mongodb
   journal:
      enabled: true
systemLog:
   destination: file
-  path: "/var/log/mongodb/mongod.log"
+  path: "/custom/path/to/mongod.log"
+  logRotate: rename
   logAppend: true
+security:
+   keyFile: /srv/mongodb/keyfile

Example: typical mongodb configuration in /etc/mongod.conf

We recommend reading “How to install mongodb” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as the mongodb administration configuration manual.
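To tie this configuration back to the nodejs application, the following is a minimal sketch of a connection using mongoose, assuming mongoose is a project dependency and appname is a hypothetical database name:

// Hedged sketch: connect to the mongod instance configured above (port 27017).
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost:27017/appname', { useNewUrlParser: true })
  .then(() => console.log('connected to mongodb'))
  .catch((error) => console.error('mongodb connection failed', error));

Example: connecting a nodejs application to the mongod instance above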

Configure nginx to serve WebSockets with an expressjs and socket.io application

The configuration exposed above does make a nodejs server running on a private network public. However, since the protocol of communication is HTTP, any other protocol, for instance WebSocket, trying to communicate on the same channel will yield an error. To make WebSocket work while using the same port as HTTP (port 80), we need nginx to upgrade the connection so that WebSocket messages can pass as well.

The configuration below does the following, in order:

#1 Tells nginx to use HTTP/1.1 when talking to the upstream, which is required for protocol upgrades
#2 Forwards the client's Upgrade header to the upstream
#3 Sets the Connection header to "upgrade", so the connection can be switched to the WebSocket protocol

server{
  #...
  location /{
      proxy_http_version 1.1; #1
      proxy_set_header Upgrade $http_upgrade; #2
      proxy_set_header Connection "upgrade"; #3
  }
}

Example: 3 lines that enable nginx to serve WebSockets

Proxying WebSockets in an nginx configuration is based on ideas from Chris Lea's blog post Proxying WebSockets with Nginx.
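For completeness, the following is a minimal sketch of the nodejs side, assuming expressjs and socket.io are project dependencies; the /ok health check path is hypothetical, chosen to match the monit check further below:

// Hedged sketch: HTTP and WebSocket traffic share port 8080, proxied by nginx on port 80.
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

app.get('/ok', (req, res) => res.status(200).send('ok')); // health check endpoint

io.on('connection', (socket) => {
  socket.emit('welcome', { connected: true });
});

server.listen(8080); // matches the upstream servers declared in the nginx configuration

Example: expressjs and socket.io sharing one HTTP server behind nginx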

Configure upstart to start nodejs application

With the configurations we have at this point, the system applications are usable only if we start them from the command line interface. That is, every application has to be explicitly executed.

The issue in a production environment is that the terminal has to be closed at some point, once all command line tasks are completed. There are already services, such as init or systemd's systemctl, that ship with the system. We use upstart for starting and stopping applications because of its ease of configuration and its asynchronous, event-driven nature, which the other tools mentioned above lack.

upstart is a free and open source, event-based init system. It was designed with the Ubuntu Linux distribution in mind, but can also work on other Linux/Unix distributions. It has an expressive job declaration syntax that even newcomers can feel comfortable using.

The location of upstart configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix, upstart is installed, and its job files live under /etc/init.

Some other possible places are /usr/local/upstart, /usr/local/etc/upstart or any other location depending on how the operating system manages its filesystem.

At the end of a successful job configuration, we should be able to start all of the system applications and services using the following commands, either by running them one by one, or by putting them in an extra executable file to simplify the task.

As a reminder, the commands follow the form sudo service <servicename> <control>, where <servicename> is technically our application, whose job descriptor is located at /etc/init/<servicename>.conf, and <control> is one of the start, restart or stop keywords.

# testing validity of configurations
init-checkconf /etc/init/nginx.conf
init-checkconf /etc/init/redis.conf
init-checkconf /etc/init/mongod.conf
init-checkconf /etc/init/appname.conf

# restart to re-use same script post deployment
service nginx   restart  
service redis   restart  
service mongod  restart  
service appname restart  

Example: tasks to start/restart all deployment applications in appname/bin/start.sh or on a command line

Alternatively, we should be able to stop the services, either one by one or all at once, using the commands in the following example.

service nginx   stop  
service redis   stop  
service mongod  stop  
service appname stop  

# In case mongod refuses to halt 
sudo /usr/bin/mongod -f /etc/mongod.conf --shutdown

Example: tasks to stop applications in appname/bin/stop.sh or on a command line

Now that we know how to launch our services, the remaining problem is how to configure each one of the services we are running. The following are typical examples of how the aforementioned services can be brought online.

# nginx

description "nginx http daemon"
author "Author Name"

start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]

env DAEMON=/usr/sbin/nginx
env PID=/var/run/nginx.pid

expect fork
respawn
respawn limit 10 5
#oom never

pre-start script
        # Validate the nginx configuration before starting; abort on failure
        $DAEMON -t
        if [ $? -ne 0 ]; then
                exit 1
        fi
end script

exec $DAEMON

Example: nginx job descriptor in /etc/init/nginx.conf source

The job that will be executed by the redis service is as in the following script.

description "redis server"

start on runlevel [23]
stop on shutdown

pre-stop script
    rm /var/run/redis.pid
end script

script
  echo $$ > /var/run/redis.pid
  exec sudo -u redis /usr/bin/redis-server /etc/redis/redis.conf
end script

# respawn automatically, but give up if the job restarts more than 15 times within 5 seconds
respawn
respawn limit 15 5

Example: redis job descriptor in /etc/init/redis.conf

If planning to use an external monitoring service, respawn limit 15 5 should either be removed, or the monitoring tool should restart the failing service once upstart gives up (that is, after more than 15 respawns within 5 seconds).

The job that will be executed by the mongodb service is as in the following script.

This example is minimalistic; more details can be found in this resource: Github mongod.upstart. Some tuning may be required before use.

#!upstart
description "mongodb server"
author      "author name <author@email>"

start on runlevel [23]
stop on shutdown

pre-stop script
    rm /var/run/mongod.pid
end script

script
  echo $$ > /var/run/mongod.pid
  exec sudo -u mongod /usr/bin/mongod -f /etc/mongod.conf
end script

# respawn automatically, but give up if the job restarts more than 15 times within 5 seconds
respawn
respawn limit 15 5

Example: mongod job descriptor in /etc/init/mongod.conf

As with redis, if planning to use an external monitoring service, respawn limit 15 5 should either be removed, or the monitoring tool should restart the failing service once upstart gives up.

The next and last step in this section is an example of the job used to start the nodejs server. At this point, any disruption or unhandled error in the application will bring the nodejs server down. The other services will still be up and running, but unfortunately the nodejs server won't! To make failure recovery automatic, we will need yet another tool, described in the next section.

#!upstart
description "appname nodejs server"
author      "author name <author@email>"

start on startup
stop on shutdown

script
    export HOME="/var" # this is required by node to be set 
    echo $$ > /var/run/appname.pid
    exec sudo -u appname sh -c "/usr/bin/node /var/www/appname/server.js >> /var/log/appname.log 2>&1"
end script

pre-start script
    # Date format same as (new Date()).toISOString() for consistency
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] Starting" >> /var/log/appname.log
end script

pre-stop script
    rm /var/run/appname.pid
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] Stopping" >> /var/log/appname.log
end script

Example: appname job descriptor in /etc/init/appname.conf

We recommend reading “How to install upstart” and “How to install nodejs” for more in-depth information that may not be found in the current blog post. See also The upstart event system, what it is and how to use it, and, on the nginx blog, Ubuntu upstart.

Configure monit to monitor nodejs application

The previous section discussed how to automate starting/stopping services. However, when something goes unpredictably wrong, we will not be able to know that something bad happened, nor to tell which system is the culprit. Moreover, we will not be able to recover from the failure, at least not by automatically restarting the failing service.

The monitoring tool discussed below addresses most of the concerns stated above.

monit is a free and open source monitoring tool. With a little bit of ingenuity, it is possible to use it to trigger task execution, such as sending an alert when something goes off the rails or restarting a failing application.

The location of monit configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and monit is installed and configured at /etc/monit.

Some other possible places are /usr/local/monit, /usr/local/etc/monit or any other location depending on how the operating system manages its filesystem.

Monitoring will go as follows, as expressed in the configuration below:

  • When nginx goes down, notify the administrator.
  • When redis runs out of memory or goes down, force a restart.
  • mongodb may trigger daily backup scripts, and may notify when the database is down or restart attempts fail.
  • When appname goes down, uses abnormally high CPU, or runs out of memory, restart it and notify the administrator.

# The application
check host appname with address 127.0.0.1
    start "/sbin/start appname"
    stop "/sbin/stop appname"
    restart program  = "/sbin/restart appname"
    if failed port 80 protocol http
        request /ok
        with timeout 5 seconds
        then restart
    if cpu > 95% for 2 cycles then alert          # Alert on excessive usage of CPU
    if total cpu > 99% for 10 cycles then restart # Restart if CPU reaches 99 after 10 checks

# Checking using PID 
check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init/nginx start"   # service nginx start
    stop program = "/etc/init/nginx stop"     # service nginx stop
    restart program  = "/etc/init/nginx restart"
    if failed port 80 protocol http then restart  # restart when process up, but not answering
    if failed port 443 protocol https then restart

check process redis with pidfile /var/run/redis.pid
    start program = "/etc/init/redis start"   # service redis start
    stop program = "/etc/init/redis stop"     # service redis stop
    if memory > 50 MB then alert
    if total memory > 500 MB then restart

check process mongod with pidfile /var/run/mongod.pid
    start program = "/etc/init/mongod start"   # service mongod start
    stop program = "/etc/init/mongod stop"     # service mongod stop
    restart program  = "/etc/init/mongod restart"
    if failed port 27017 protocol mongo then restart  
    if disk read > 10 MB/s for 2 cycles then alert  # Alert on heavy disk reads 

Example: service checks in /etc/monit/monitrc

To check the validity of /etc/monit/monitrc, the following command can be used: monit -t. If everything looks good, starting all the services under monit can be done with the following command: monit start all.

There is one more aspect that was not discussed in the scripts above, and that is: “How does monit know where to send messages when alerting?”. The answer is in the next script, adapted from the monit documentation, and worth sharing in this blog post:

# Where to send the email
set alert foo@bar
# What message format 
set mail-format {
      from: Monit Support <monit@foo.bar>
  reply-to: support@domain.com
   subject: $SERVICE $EVENT at $DATE
   message: Monit $ACTION $SERVICE at $DATE on $HOST: $DESCRIPTION.
            Yours sincerely,
            monit
}
# Setting the mailserver, in our case, mailgun 
set mailserver smtp.mailgun.org port 587
  username mailgunusr@domain.com password <PASSWORD>
  using <SSL> with timeout 30 seconds
# <SSL> can be SSLV2 | SSLV3 | TLSV1 | TLSV11 | TLSV12 | TLSV13

Example: custom alert messages in /etc/monit/monitrc

This is an example of a few things that can be achieved. There is more monit can do to enhance the deployment experience, free of charge. Those things can include, but are not limited to, scheduled reporting, database backups, purging sessions or accounts that look suspicious, as well as triggering tasks that send emails; see the sketch below.
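As a hedged sketch of one such extra, a scheduled task can be expressed in the monit control file as a check program with a cron-like every statement; the backup script path and name are hypothetical:

# Run a nightly database backup script at 03:00 and alert if it fails.
check program nightly-backup with path "/usr/local/bin/backup-mongodb.sh"
    every "0 3 * * *"
    if status != 0 then alert

Example: a scheduled backup task in /etc/monit/monitrc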

We recommend reading “How to install monit” and “How to install nodejs” for more in-depth information that may not be found in the current blog post. See also Quick tutorial on monit, How to install and configure monit, and Creating issues when something goes wrong.

Conclusion

The two tools that tie the whole system together also need a way to be started and stopped themselves. Luckily, the Linux/Unix environment provides a way to make daemons start at boot time.
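For instance, on a Debian/Ubuntu setup, monit ships with an init script that can be enabled so that the monitoring itself comes back after a reboot; a hedged sketch:

# Enable the monit init script in the default runlevels so it starts at boot.
sudo update-rc.d monit defaults
sudo service monit start

Example: making monit start at boot time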

References

#snippets #configurations #questions #discuss #y2020 #Jan2020