Simple Engineering

TDD

The stream API provides an asynchronous computation model that handles heavy workloads while keeping a small memory footprint. As exciting as that may sound, testing streams is somewhat intimidating. This blog lays out key elements necessary to be successful when mocking the stream API.

Keep in mind that there is a clear difference between mocks, stubs, spies, and fakes, even though we use the term mock interchangeably in this text.

In this article we will talk about:

  • Understanding the difference between Readable and Writable streams
  • Stubbing Writable stream
  • Stubbing Readable stream
  • Stubbing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

//quick example to show multiple pipings
var fs = require('fs');
var gzip = require('zlib').createGzip();
var route = require('express').Router();
//getter() reads a large file of songs metadata, transforms it, and sends back scaled down metadata
route.get('/songs', function getter(req, res, next){
        let rstream = fs.createReadStream('./several-TB-of-songs.json');
        rstream
            .pipe(new MetadataStreamTransformer())
            .pipe(gzip)
            .pipe(res);
        // forwarding the error to the next handler
        rstream.on('error', (error) => next(error, null));
});

At a glance, the code is supposed to read a very large JSON file (terabytes of metadata about songs), apply some transformations, gzip the result, and send the response to the caller by piping the results into the response object.

The next example demonstrates what a typical transformer such as MetadataStreamTransformer looks like.

const inherits = require('util').inherits;
const Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//<= re-enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo  process chunk + by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || random() });
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//tells that operation is over 
    if(typeof next === 'function') {next();}
};

The inheritance style used in this program might be old, but it illustrates well enough, in a prototypal way, that our MetadataStreamTransformer inherits from Stream#Transform.

What can possibly go wrong?

Stubbing functions in a stream processing scenario may yield the following challenges:

  • How to deal with the asynchronous nature of streams
  • Identifying areas where it makes sense to add a stub, for instance: expensive operations
  • Identifying key areas needing drop-in replacements, for instance reading from a third party source over the network.

Primer

The key when stubbing streams is:

  • To identify where the heavy lifting is happening. In pure stream terms, the functions that execute _read() and _write() are our main focus.
  • To isolate entities so that small parts can be tested in isolation. For instance, make sure we test MetadataStreamTransformer in isolation, and mock any response fed into the .pipe() operator in other places.

What is the difference between readable vs writable vs duplex streams? The long answer is available in substack's Stream Handbook

Generally speaking, Readable streams produce data that can be fed into Writable streams. Readable streams can be .pipe()d from, but not into. Readable streams emit readable|data events and, implementation-wise, implement ._read() from the Stream#Readable interface.

Writable streams can be .pipe()d into, but not from. For example, the res object in the examples above has other streams piped into it; the opposite is not always guaranteed. Writable streams emit drain|finish events and, implementation-wise, implement ._write() from the Stream#Writable interface.

Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transform streams are duplex and implement ._transform() from the Stream#Transform interface.
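
To make those roles concrete, here is a minimal sketch (unrelated to the songs example) of a Readable piped into a Writable:

const { Readable, Writable } = require('stream');

// a Readable produces data: it implements _read(), here via the read option
const source = new Readable({
    read() {
        this.push('some data');
        this.push(null); // no more data
    }
});

// a Writable consumes data: it implements _write(), here via the write option
const sink = new Writable({
    write(chunk, encoding, next) {
        console.log(chunk.toString());
        next();
    }
});

// data flows from the Readable into the Writable
source.pipe(sink);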

Modus Operandi

How to test the above code by taking on smaller pieces?

  • fs.createReadStream won't be tested, but stubbed to return a mocked readable stream
  • .pipe() will be stubbed to return a chain of stream operators
  • gzip and res won't be tested, and are therefore stubbed to return writable+readable mocked stream objects
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next(), and check that it has been called
  • MetadataStreamTransformer will be tested in isolation and MetadataStreamTransformer._transform() will be treated as any other function, except it accepts streams and emits events

How to stub stream functions

describe('/songs', () => {
    before(() => {
        //responseMock is assumed to be a writable stream stub defined elsewhere
        sinon.stub(fs, 'createReadStream').returns({
            pipe: sinon.stub().returns({
                pipe: sinon.stub().returns({
                    pipe: sinon.stub().returns(responseMock)
                })
            }),
            on: sinon.spy(() => true)
        });
    });
    after(() => fs.createReadStream.restore());
});

This way of chained stubbing is available in our toolbox. Great power comes with great responsibilities, and wielding this sword may not always be a good idea.

There is an alternative at the very end of this discussion

Testing the transformer stream class in isolation may be broken down into two options:

  • Stub the whole Transform instance
  • Or stub .push() and simulate a write by feeding in a mocked readable stream of data

The stubbed push() is a good place to add assertions.

it('_transform()', function(done){
    var Readable = require('stream').Readable;
    //a readable stream that feeds two JSON chunks, then signals the end
    var rstream = new Readable({ read(){} });
    var tstream = new MetadataStreamTransformer();
    var mockPush = sinon.stub(tstream, 'push').callsFake(function(data){
        if(data !== null) assert.isNumber(data.id);//testing data handed over to downstream consumers
        return true;
    });
    rstream.push('{"id": 1}');
    rstream.push('{"id": 2}');
    rstream.push(null);
    rstream.pipe(tstream);
    tstream.on('finish', function(){
        expect(mockPush.called, '#push() has been called').to.be.true;
        mockPush.restore();
        done();
    });
});

How to Mock Stream Response Objects

The classic example of a readable stream is reading from a file. This example shows how to mock fs.createReadStream so that it returns a readable stream, capable of being asserted on.

//the stub can emit two or more data events + close the stream
var PassThrough = require('stream').PassThrough;
var rstream = new PassThrough();
sinon.stub(fs, 'createReadStream').callsFake(function(file){
    //trick from @link https://stackoverflow.com/a/33154121/132610
    assert(file, '#createReadStream received a file');
    //emit on the next tick, so that listeners attached by the code under test receive the data
    process.nextTick(() => {
        rstream.emit('data', '{"id": 1}');
        rstream.emit('data', '{"id": 2}');
        rstream.emit('end');
    });
    return rstream;
});

var pipeStub = sinon.spy(rstream, 'pipe');
//Once called, the structure above will stream two elements: good enough to simulate reading a file.
//`gzip` (another transformer stream) can be stubbed the same way.
var next = sinon.stub();
//use getter() directly | or call the whole route; req and res are mocked request/response objects
getter(req, res, next);
//expectations follow:
expect(rstream.pipe.called, '#pipe() has been called').to.be.true;

Conclusion

In this article, we established the difference between Readable and Writable streams and how to stub each one of them when unit testing.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #TDD #streams #nodejs #mocking

This post highlights snapshots of best practices and hacks to code, test, deploy, and maintain large-scale nodejs apps. It provides the big lines of what became a book on testing nodejs applications.

If you haven't yet, read the How to make nodejs applications modular article. This article is an overall follow-up.

Like some of the articles that came before this one, we are going to focus on a simple question as our north star: What are the most important questions developers have when testing a nodejs application? When possible, a quick answer will be provided; otherwise, we will point in the right direction where the information can be found.

In this article we will talk about:

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

var express = require('express'),
  app = express(),
  server = require('http').createServer(app);
//...
require('./config');
require('./utils/mongodb');
require('./utils/middleware')(app);
require('./routes')(app);
require('./realtime')(app, server)
//...
module.exports.server = server; 

Example:

The code provided here is a recap of How to make nodejs applications modular article. You may need to give it a test drive, as this section highlights an already modularized example.

Testing

Automation is what developers do for a living. Manual testing is tedious and repetitive, and those are two key characteristics of things we love automating. Automated testing is quite intimidating for newbies and veterans alike. Testing tends to be more of an art: the more you practice, the better you hone your craft.

In the blogosphere, – My node Test Strategy ~ RSharper Blog. – nodejs testing essentials

BDD versus TDD

Why should we even test

The need for testing is unanimous within the developer community; the question is always around how to go about testing.

There is a discussion mentioned in the first chapter between @kentbeck, @martinfowler and @dhh that made the rounds on social media and blogs, and finally became a subject of reflection in the community. When dealing with legacy code, there should be a balance: adopt TDD as only one tool in our toolbox.

In the book we do the following exercise as an alternative to classic TDD: read, analyze, modify if necessary, rinse and repeat. We cut the bullshit, get to test whatever needs to be tested, and let nature take its course.

One thing is clear: We cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is on “How” to go about testing.

There is a summary of the discussions mentioned earlier, titled Is TDD Dead?. In the blogosphere, – BDD-TDD ~ RobotLovesYou Blog. – My node Test Strategy ~ RSharper Blog – A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials

What should be tested

Before we dive into it, let's re-examine the pros and cons of automated tests — in the current case, unit tests.

Pros:

  • Boost release confidence
  • Prevent common use case and unexpected bugs
  • Help the project's new developers better understand the code
  • Improve confidence when refactoring code
  • A well-tested product improves customer experience

Cons:

  • Take time to write
  • Increase learning curve

At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything, be it features of a product or functions of code. Re-testing large applications manually is daunting, exhausting, and sometimes simply not feasible.

A good way to think about testing is not in terms of layers (controllers, models, etc.); layers tend to be bigger. It is better to think in terms of something much smaller, like a function (the TDD way) or a feature (the BDD way).

In brief, every controller, business logic, utility library, nodejs server, route, and feature is set to be tested ahead of release.

There is an article on this blog that gives more insight on — How to create good test cases (Case > Feature > Expectations | GivenWhenThen) — titled “How to write test cases developers will love reading”. In the blogosphere, – Getting started with nodejs and mocha

Choosing the right testing tools

There is no shortage of tools in the nodejs community; the problem is analysis paralysis. Whenever the time comes to choose testing tools, there are layers that should be taken into account: test runners, test doubles, reporting, and eventually any compiler that needs to be added to the mix.

Other than that, there is a list of a few things to consider when choosing a testing framework: – learning curve – how easy it is to integrate into the project/existing testing frameworks – how long it takes to debug testing code – how good the documentation is – how big the community is, and how well the library is maintained – what it may solve faster (spies, mocking, coverage reports, etc.) – instrumentation and test reporting, just to name a few.

There are sections dedicated to providing hints and suggestions throughout the book. There is also this article “How to choose the right tools” on this blog that gives a baseline framework to choose, not only for testing frameworks but any tool. Finally, In the blogosphere, – jasmine vs. mocha, chai and sinon. – Evan Hahn has pretty good examples of the use of test doubles in How do I jasmine blog post. – Getting started with nodejs and jasmine – has some pretty amazing examples, and is simple to start with. – Testing expressjs REST APIs with Mocha

Testing servers

The not-so-obvious part when testing servers is how to simulate starting and stopping the server. These two operations should not bootstrap dependent servers (database, data stores) or cause side effects (network requests, writing to files), to reduce the risk associated with running an actual server.
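
As an illustration, here is a minimal sketch that stubs the exported server's listen()/close() so the test never opens a real port; the ./app module path is an assumption, while the exported server mirrors the module.exports.server construct shown earlier:

const sinon = require('sinon');
const { expect } = require('chai');

describe('server', () => {
  it('starts and stops without opening a real port', () => {
    const { server } = require('./app'); // hypothetical module exporting the http server
    const listen = sinon.stub(server, 'listen').returnsThis();
    const close = sinon.stub(server, 'close').yields();

    server.listen(3000);
    server.close(() => {});

    expect(listen.calledWith(3000)).to.equal(true);
    expect(close.called).to.equal(true);
    listen.restore();
    close.restore();
  });
});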

There is a chapter dedicated to testing servers in the book. There is also this [article on this blog that can give more insights](). In the blogosphere, – How to correctly unit test express server – There is a better code structure organization that makes it easy to test and get good test coverage in “Testing nodejs with mocha”.

Testing modules

Testing modules is not that different from testing a function or a class. When we start looking at it from this angle, things become a little easier.

The grain of salt: a module that is not directly a core component of our application should be left alone and mocked out entirely when possible. This way we keep things isolated.
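
As a sketch of that idea, a non-core dependency can be swapped with a fake, for instance with the proxyquire library; the mailer module, its send() signature, and the nodemailer dependency below are assumptions:

const sinon = require('sinon');
const proxyquire = require('proxyquire');

describe('mailer module', () => {
  it('delegates to the transport without sending real emails', () => {
    const sendMail = sinon.stub().yields(null, { accepted: ['someone@example.com'] });
    // the nodemailer dependency is mocked out entirely
    const mailer = proxyquire('./utils/mailer', {
      nodemailer: { createTransport: () => ({ sendMail }) }
    });

    // assumes mailer.send() invokes the transport synchronously
    mailer.send('someone@example.com', 'hello', () => {});
    sinon.assert.calledOnce(sendMail);
  });
});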

There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries (modules) in the book. There is also an entire series of articles — a more theoretical: “How to make nodejs applications modular” and a more technical: “How to modularize nodejs applications” — on this blog about modularization techniques. In the blogosphere, – Export This: Interface Design Patterns for nodejs Modules by Alon Salant, CEO of Good Eggs – nodejs module patterns using simple examples by Darren DeRider – How to modularize your Chat Application

Testing routes

Challenges while testing expressjs Routes

Some of the challenges associated with testing routes are testing authenticated routes, mocking requests, mocking responses, as well as testing routes in isolation without a need to spin up a server. When testing routes, it is easy to fall into the integration testing trap, either for simplicity or for lack of motivation to dig deeper.

The integration testing trap is when a developer confuses an integration test (or E2E test) with a unit test, and vice versa. The success of balanced test coverage lies in identifying early the kind of tests adequate for a given context, and what percentage of each kind to use.

For a test to be a unit test in a route testing context, it should – focus on testing a code block (function, class, etc.), not the output of a route – mock requests to third-party systems (payment gateways, email systems, etc.) – mock database read/write operations – test worst-case scenarios such as missing data and data structures.
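
A minimal sketch of such a unit test, with hand-rolled req/res objects and a stubbed model read; the getUser handler and User model paths are assumptions, though they mirror the route snippets shown later in this post:

const sinon = require('sinon');
const User = require('./models').User;            // hypothetical mongoose model
const getUser = require('./users/get-user');      // hypothetical route handler

it('forwards a database error to next()', () => {
  const req = { params: { id: 1234 } };
  const res = { status: sinon.stub().returnsThis(), json: sinon.spy() };
  const next = sinon.spy();

  // mock the database read: worst-case scenario, the read fails
  sinon.stub(User, 'findById').yields(new Error('connection lost'));

  getUser(req, res, next);

  sinon.assert.calledOnce(next);
  sinon.assert.notCalled(res.json);
  User.findById.restore();
});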

There is a chapter dedicated to testing routes in the book. There is also this article “Testing expressjs Routes” on this blog that gives more insight on the subject. In the blogosphere – A TDD approach to building a todo API using nodejs and mongodb – Marcus on supertest ~ Marcus Soft Blog

Testing controllers

When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or even classes. In MVC jargon, this layer is also known as the controller layer.

The challenges of testing controllers are, unsurprisingly, the same as when testing expressjs route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations, or service layers that are not core/critical to validating the controller's expectations, are some of such challenges.

So is mocking controller request/response objects and, when necessary, some middleware functions.
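
For instance, a controller backed by a service layer can be exercised with the service stubbed out; the module paths and function names below are assumptions:

const sinon = require('sinon');
const userService = require('./services/user');        // hypothetical service layer
const userController = require('./controllers/user');  // hypothetical controller

it('responds with the profile returned by the service', async () => {
  const profile = { id: 1234, name: 'Jane' };
  const getProfile = sinon.stub(userService, 'getProfile').resolves(profile);
  const req = { params: { id: 1234 } };
  const res = { status: sinon.stub().returnsThis(), json: sinon.spy() };

  await userController.getProfile(req, res, () => {});

  sinon.assert.calledWith(res.json, profile);
  getProfile.restore();
});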

There is a chapter dedicated to testing controllers in the book. There is also this article Testing nodejs controllers with expressjs framework on this blog that gives more insight on the subject. In the blogosphere, – This article covers Mocking Responses, etc — How to test express controllers.

Testing services

There are some instances where adding a service layer makes sense.

One of those instances is when an application has a collection of single functions under a utility (utils) umbrella. Chances are some of the functions under that umbrella are related in terms of features, the functionality they offer, or both. Such functions are good candidates to be grouped under a class: a service.

Another good example is for applications that heavily use the model. Chances are the same functions can be re-used in multiple instances, and fixing an issue involves fixing multiple places as well. When that is the case, such functions can be grouped under one banner, in such a way that an update to one function gets reflected in every instance where the function has been used.

From these two use cases, testing services has no one-size-fits-all strategy. Every service should be dealt with depending on the context it is operating in.
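
To illustrate the first use case, a couple of related utility functions grouped under a service are tested like any other class; this is a minimal sketch and the names are made up:

const assert = require('assert');

// utils previously scattered across the codebase, grouped under one banner
class PricingService {
  constructor(taxRate) { this.taxRate = taxRate; }
  applyTax(amount) { return amount * (1 + this.taxRate); }
  isTaxable(item) { return item.category !== 'books'; }
}

describe('PricingService', () => {
  const service = new PricingService(0.5);

  it('applies the configured tax rate', () => {
    assert.strictEqual(service.applyTax(100), 150);
  });

  it('exempts books from tax', () => {
    assert.strictEqual(service.isTaxable({ category: 'books' }), false);
  });
});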

There is a chapter dedicated to testing services in the book. In the blogosphere, – “Building Structured Backends with nodejs and HexNut” by Francis Stokes ~ aka @fstokesman on Twitter source ...

Testing middleware

Middleware, in expressjs (connectjs) jargon, are in a sense hooks that intercept and process requests and forward the result to the rest of the route. It is no surprise that testing middleware shares the same challenges as testing route handlers and controllers.
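
A minimal sketch: an authentication middleware tested with mocked req/res/next; the middleware path and its 401 behavior are assumptions:

const sinon = require('sinon');
const authenticated = require('./middleware/authenticated'); // hypothetical middleware

it('blocks requests without a session', () => {
  const req = { session: null };
  const res = { status: sinon.stub().returnsThis(), json: sinon.spy() };
  const next = sinon.spy();

  authenticated(req, res, next);

  // assuming the middleware responds with 401 itself instead of calling next()
  sinon.assert.calledWith(res.status, 401);
  sinon.assert.notCalled(next);
});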

There is a chapter dedicated to testing middleware in the book. There is also this article “Testing expressjs Middleware” on this blog that gives more insight on the subject. In the blogosphere, – How to test expressjs controllers

Testing asynchronous code

Asynchronous code is a wide subject in the nodejs community. Things ranging from regular callbacks, promises, async/await constructs, streams, and event streams (reactive) all fall under the asynchronous umbrella.

Challenges associated with asynchronous testing depend on the use case and context at hand. However, there are striking similarities, say, between testing an async/await construct versus a promise.

When an object is available, it makes sense to get a hold of it and execute assertions once it resolves. That is feasible for promises, streams, and the async/await construct. However, when the object is some kind of event emitter, the hold on the object can be used to add a listener and assert once the listener fires.
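
A sketch of both flavors, one holding on to a promise and one listening on an emitter; fetchMetadata and buildStream are hypothetical names:

const { expect } = require('chai');

it('resolves with the requested metadata', async () => {
  // hold on to the promise and assert once it resolves
  const metadata = await fetchMetadata(1234); // hypothetical promise-returning function
  expect(metadata.id).to.equal(1234);
});

it('signals completion with an end event', (done) => {
  const stream = buildStream(); // hypothetical event-emitting construct
  stream.on('error', done);
  stream.on('end', () => done()); // assert once the listener fires
  stream.resume();
});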

There are multiple chapters dedicated to testing asynchronous code in the book. There are also multiple articles on this blog that give more insight on the subject, such as – “How to stub a stream function” – “How to Stub Promise Function and Mock Resolved Output” – “Testing nodejs streams”. In the blogosphere, – []()

Testing models

Testing models goes hand in hand with mocking database access functions.

Functions that access or change database state can be replaced by fakes: custom function replacements capable of supplying|emulating results similar to those of the replaced functions.

sinon may not be a unanimous choice, but it is a feature-complete, battle-tested test double library, amongst many others to choose from.
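
A minimal sketch with sinon: the model's findById is replaced by a fake that emulates a database read, so any code under test calling it receives the canned result; the User model path is an assumption:

const sinon = require('sinon');
const { expect } = require('chai');
const User = require('./models').User; // hypothetical mongoose model

it('reads a user without touching the database', (done) => {
  sinon.stub(User, 'findById').yields(null, { _id: 1234, name: 'Jane' });

  // code under test would call User.findById; the fake supplies the result
  User.findById(1234, (error, user) => {
    expect(user.name).to.equal('Jane');
    User.findById.restore();
    done(error);
  });
});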

There is a chapter dedicated to testing models in the book. There is also this article []() on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose modelsstubbing mongoose model question and answers on StackOverflow – Mocking database calls by wrapping mongoose with mockgoose

Testing WebSockets

Some of the challenges of testing WebSockets can be summarized as trying to simulate sending and receiving a message on the WebSocket endpoint.
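
One hedged way to simulate both ends is to boot socket.io on a test port and drive it with socket.io-client; this is a minimal sketch that assumes both packages are installed and an echoing message handler:

const ioServer = require('socket.io')();
const ioClient = require('socket.io-client');
const { expect } = require('chai');

describe('message endpoint', () => {
  before(() => {
    ioServer.on('connection', (socket) => {
      socket.on('message', (payload) => socket.emit('message', payload)); // echo back
    });
    ioServer.listen(5000);
  });
  after(() => ioServer.close());

  it('receives the message it sent', (done) => {
    const client = ioClient('http://localhost:5000');
    client.on('message', (payload) => {
      expect(payload).to.equal('ping');
      client.close();
      done();
    });
    client.emit('message', 'ping');
  });
});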

There is a chapter dedicated to testing WebSockets in the book. There is also this article on this blog that can give more ideas on how to go about testing WebSocket endpoints — another one on how to integrate WebSockets with nodejs. Elsewhere in the blogosphere, – Testing socket.io with mocha, should.js and socket.io clientsharing session between expressjs and socket.io

Testing background jobs

Background jobs bring batch processing to the nodejs ecosystem. They constitute a special use case of asynchronous communication that spans time and the processes the system is running on.

Testing this kind of complex construct requires distilling the fundamental work done by each function/construct, focusing on the signal without losing the big picture. It requires quite a paradigm shift (word used with reservation).
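
A sketch of that distillation: the job processor is a plain function tested in isolation, while the queue itself (agenda, bull, kue, ...) stays out of the unit test; the names and job shape below are made up:

const sinon = require('sinon');
const sendDigest = require('./jobs/send-digest'); // hypothetical job processor

it('sends one digest per scheduled job', async () => {
  const mailer = { send: sinon.stub().resolves(true) };
  const job = { attrs: { data: { userId: 1234 } } }; // assumed shape of an agenda-like job

  await sendDigest(job, mailer);

  sinon.assert.calledOnce(mailer.send);
});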

There is a chapter dedicated to testing background jobs in the book. There is an article Testing nodejs streams on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying mongoose models ~ CodeUtopia Blog

Conclusion

Some source code samples came from QA sites such as StackOverflow, hackers gists, Github documentation, developer blogs, and from my personal projects.

There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because mentioning all of them can fit into a book.

In this article, we highlighted what it takes to test various layers, and at the same time drew the distinction between the BDD/TDD testing schools. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #nodejs #testing #tdd #bdd

In most integration and end-to-end routes testing, a live server may be deemed critical to make reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore the combination of mocking HTTP requests/responses to make use of an actual server obsolete.

In this article we will talk about:

  • Mocking the Server instance
  • Mocking Route's Request/Response objects
  • Modularization of routes and revealing server instance
  • Auto reload(hot reload) using:nodemon, supervisor or forever

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

//
var User = require('./models').User; 
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router with authentication middleware
var router = require('express').Router();
var authenticated = require('./middleware/authenticated');
var getUser = require('./users/get-user');
router.get('/users/:id', authenticated, getUser);
module.exports = router;

What can possibly go wrong?

When trying to figure out how to approach testing expressjs routes, the driving force behind falling into the integration testing trap is the need to start a server. The following points may be a challenge:

  • Routes should be served at any time while testing
  • Testing in sandboxed environments restricts server usage (opening new ports, serving requests, etc.)
  • Mocking request/response objects to take the need for a server out of the picture

Testing routes without spinning up a server

The key is mocking request/response objects. A typical REST integration test shares similarities with the following snippet.


var app = require('express')(),
  request = require('./support/http');

describe('req .route', function(){
  it('should serve on route /user/:id/edit', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.path).to.equal('/user/:id/edit');
      res.end();
    });

    request(app)
      .get('/user/12/edit')
      .expect(200, done);
  });
  it('should serve get requests', function(done){
    app.get('/user/:id/edit', function(req, res){
      expect(req.route.method).to.equal('get');
      res.end();
    });

    request(app)
    .get('/user/12/edit')
    .expect(200, done);
  });
});

Example:

Example adapted from StackOverflow and supertest. supertest spins up a server if necessary. In case we don't want a server at all, dupertest can be a reasonable alternative. request = require('./support/http') is the utility that may use either of those two libraries to provide a request.
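
For reference, the ./support/http indirection can be as small as one line; a sketch assuming supertest is the library of choice:

// ./support/http
// a single place to swap supertest for dupertest (or any other request helper)
// without touching the test files that require('./support/http')
module.exports = require('supertest');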

Choosing tools

If you haven't already, reading the “How to choose the right tools” blog post gives insights into the framework we used to choose the tools we suggest in this blog.

Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes by mocking out the server:

  • There exist well-respected test runners such as jasmine (jasmine-node), ava, and jest in the wild. mocha will do just fine for our examples.
  • There are also code instrumentation tools in the wild. mocha integrates well with the istanbul test coverage and reporting library.
  • supertest, nock and dupertest are frameworks for mocking HTTP; nock intercepts requests, while dupertest responds better to our demands (not spinning up a server).

Workflow

If you haven't already, read the “How to write test cases developers will love”

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"
# OR "nyc test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Example: istanbul generates reports as tests progress

Conclusion

To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer.

Testing nodejs routes is quite intimidating on the first encounter. This article contributed to shifting fear into opportunity.

Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good meaningful message is pure art. There are additional complementary materials in the “Testing nodejs applications” book.

References

#tdd #testing #nodejs #expressjs #server

The asynchronous computation model makes nodejs flexible enough to perform heavy computations while keeping a relatively low memory footprint. The stream API is one of those computation models; this article explores how to approach testing it.

In this article we will talk about:

  • Difference between Readable/Writable and Duplex streams
  • Testing Writable stream
  • Testing Readable stream
  • Testing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

//Read + Transform + Write stream processing example
var fs = require('fs'),
    gzip = require('zlib').createGzip(),
    route = require('express').Router();
//getter() reads a large file of songs metadata, transforms it, and sends back scaled down metadata
route.get('/songs', function getter(req, res, next){
    let rstream = fs.createReadStream('./several-tb-of-songs.json');
    rstream
        .pipe(new MetadataStreamTransformer())
        .pipe(gzip)
        .pipe(res);
    // forwarding the error to the next handler
    rstream.on('error', error => next(error, null));
});

//Transformer Stream example
const inherits = require('util').inherits,
    Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    // re-enforces object mode chunks
    this.options = Object.assign({}, options, {objectMode: true});
    Transform.call(this, this.options);
}

inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo  process chunk + by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || random() });
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//tells that operation is over 
    if(typeof next === 'function') {next();}
};

The example above provides a clear picture of the context in which Readable, Writable, and Duplex(Transform) streams can be used.

What can possibly go wrong?

Streams are particularly hard to test because of their asynchronous nature; I/O on the filesystem or against third-party endpoints is no exception. It is easy to fall into the integration testing trap when testing nodejs streams.

Among other things, the following are challenges we may expect when (unit) testing streams:

  • Identifying areas where it makes sense to stub
  • Choosing the right mock output to feed into stubs
  • Mocking stream read/transform/write operations

There is an article dedicated to stubbing stream functions, so the current text will not go into details about the stubbing parts.

Choosing tools

If you haven't already, reading the “How to choose the right tools” blog post gives insights into the framework we used to choose the tools we suggest in this blog.

Following our own “Choosing the right tools” framework, the tools below are not the only possible choice, rather the ones that made sense to complete this article:

  • We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.
  • The stack of mocha, chai, and sinon (assertion and test double libraries) is worth a shot.
  • The node-mocks-http framework for mocking HTTP request/response objects.
  • Code under test is instrumented to make coverage tracking possible. The test coverage and reporting tool we adopted, also widely adopted by the mocha community, is istanbul.

Workflow

It is possible to generate reports as tests progress.

The latest versions of istanbul use the nyc name.

# In package.json at "test" - add next line
> "istanbul test mocha -- --color --reporter mocha-lcov-reporter specs"

# Then run the tests using 
$ npm test --coverage 

Show me the tests

If you haven't already, read the “How to write test cases developers will love”

We assume we approach testing of a fairly large nodejs application from a real-world perspective, and with refactoring in mind. A good way to think about large scale is to focus on smaller things and how they integrate (expand) with the rest of the application.

The philosophy of test-driven development is to write failing tests, followed by code that resolves the failing use cases, then refactor, rinse, and repeat. In most real-world settings, writing tests may start at any given moment, depending on multiple variables, one of which is the pressure and timeline of the project at hand.

It is not a new concept for some tests to be written after the fact (characterization tests). Another case is when dealing with legacy code, or simply an ill-tested code base. That is the case we are dealing with in our sample code.

The first thing is rather to read the code and identify areas of improvement before we start writing tests. The clear improvement opportunity is to eject the function getter() out of the router. Our new construct looks like the following: route.get('/songs', getter);, which allows us to test getter() in isolation.
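
A sketch of that refactoring: the handler moves into its own module and the router only wires it up; the file layout is an assumption, and the requires (fs, gzip, MetadataStreamTransformer) are omitted for brevity:

// songs/getter.js
module.exports = function getter(req, res, next){
    let rstream = fs.createReadStream('./several-tb-of-songs.json');
    rstream
        .pipe(new MetadataStreamTransformer())
        .pipe(gzip)
        .pipe(res);
    rstream.on('error', error => next(error, null));
};

// routes.js
const getter = require('./songs/getter');
route.get('/songs', getter);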

Our test skeleton looks a bit like the following lines.

describe('getter()', () => {
  let req, res, next, sessionObject;
  beforeEach(() => {
    next = sinon.spy();
    sessionObject = { ... };//mocking the session object
    req = { params: {id: 1234}, user: sessionObject };
    res = { status: (code) => ({ json: sinon.spy() }) };//status() returns an object exposing json()
  });
    //...
});

Let's examine the case where the stream is actually going to fail.

Note that we lack a way to get a handle on the stream object, as the handler does not return any object to tap into. Luckily, the response and request objects are both instances of streams, so good mocking can come to our rescue.


//...
let eventEmitter = require('events').EventEmitter,
  httpMock = require('node-mocks-http');

//...
it('fails when no songs are found', function(done){
    var self = this;
    this.next = sinon.spy();
    //the method and url mirror the route under test
    this.req = httpMock.createRequest({method: 'GET', url: '/songs'});
    this.res = httpMock.createResponse({eventEmitter: eventEmitter});

    getter(this.req, this.res, this.next);
    this.res.on('error', function(error){
        //the error is the expected outcome of this test case
        assert(self.next.called, 'next() has been called');
        done();
    });
});

Mocking both request and response objects makes more sense in our context. Likewise, to mock the success case, the reader stream's fs.createReadStream() has to be stubbed and made to eject a stream of fake content. This time, this.res.on('end') will be used to make assertions.
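
A minimal sketch of that success case, reusing the mocked req/res from the previous example. It deliberately stops short of asserting on the response body: feeding fake chunks all the way through to this.res.on('end') follows the same stubbing recipe covered in the companion article on stubbing stream functions.

it('reads and pipes the songs metadata', function() {
    //the reader stream is stubbed and ejects a fake source instead of terabytes from disk
    var PassThrough = require('stream').PassThrough;
    var fakeSource = new PassThrough();
    var createReadStream = sinon.stub(fs, 'createReadStream').returns(fakeSource);
    var pipe = sinon.spy(fakeSource, 'pipe');

    this.req = httpMock.createRequest({method: 'GET', url: '/songs'});
    this.res = httpMock.createResponse({eventEmitter: eventEmitter});

    getter(this.req, this.res, sinon.spy());

    assert(createReadStream.calledWith('./several-tb-of-songs.json'), '#createReadStream() received the songs file');
    assert(pipe.called, '#pipe() has been called on the fake source');
    createReadStream.restore();
});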

Conclusion

Automated testing of streams is quite intimidating for newbies and veterans alike. There are enough use cases in the book to get you past that mark.

In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, like in any art, practice makes perfect ~ testing streams is particularly challenging, especially when a read/write is involved. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #tdd #streams #nodejs #mocking

This article revisits essentials on how to install upstart, an event-based daemon for starting/stopping tasks on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. Testing Nodejs Applications Book Cover You can grab a copy of this book on this link

There is a plethora of task execution solutions, for instance systemd and init, that are rather complex to work with. That makes upstart a good alternative to such tools.

In this article you will learn about:

  • Tools available for task execution
  • How to install the upstart task execution daemon
  • How to write a basic upstart task

Installing upstart on Linux

It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and aptitude-enabled systems as follows:

$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
$ apt-get dist-upgrade # Installs only new updates

Example: updating aptitude binaries

At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.

Installing upstart on Linux using apt
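
On older Ubuntu releases upstart ships as the default init system; where it is missing, the package can be pulled in with apt. This is a sketch, and package availability depends on the release:

$ sudo apt-get update
$ sudo apt-get install upstart

# verify the installed version
$ initctl --version

Example: installing upstart binaries using apt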

Installing upstart on macOS

upstart is a utility designed mainly for Linux systems. However, macOS has its equivalent, launchctl, designed to start/stop processes before/after system restarts.

Installing upstart on a Windows machine

Whereas macOS and Linux systems are quite relaxed when it comes to working with system processes, Windows is a beast in its own way. upstart was built for *nix systems, but there is an equivalent on Windows systems: the Service Control Manager. It basically has the same ability to check and restart processes that are failing.
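
As for writing a basic upstart task, a minimal job definition looks like the following sketch; the application name, paths, and log location are hypothetical:

# /etc/init/app.conf
description "sample nodejs application"

start on filesystem and started networking
stop on shutdown

respawn
env NODE_ENV=production

exec /usr/bin/node /var/www/app/server.js >> /var/log/app.log 2>&1

Example: a minimal upstart job definition

Once the file is in place, the task can be controlled with sudo start app, sudo stop app, and sudo status app.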

Automated upgrades

Before we dive into automatic upgrades, we should consider nuances associated with managing an upstart installation. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following the SemVer ~ aka Semantic Versioning ~ standard, it is not recommended to consider minor or major versions for automated upgrades. This is because minor versions, as well as major versions, are subject to introducing breaking changes or incompatibility between two versions. On the other hand, patches rarely introduce breaking changes; those can therefore be automated.

In the case of a critical infrastructure piece of the process state management calibre, we expect breaking changes when a configuration setting is added or dropped between two successive versions. Upstart provides backward compatibility, so the chance of breaking changes between two minor versions is really minimal.

We should highlight that it is always better to upgrade at deployment time. The process is even easier in a containerized context. We should also automate only patches, so as not to miss security patches.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work are: one, to maintain a blacklist of packages we do not want to automatically update, and two, to enable particular packages we would love to update on a periodic basis. That is compiled in the following configuration files.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step is necessary to make sure the unattended-upgrades download, install, and cleanup tasks have a defined period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is being able to report problems when an update fails, so that a human can intervene whenever possible. That is where apticron, the second tool installed earlier, comes in. To make it work, we will specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf

Conclusion

In this article we revisited ways to install upstart on various platforms. Even though configuration was beyond the scope of this article, we managed to get everyday quick refreshers out.

References

#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

This article revisits essentials on how to install the monit monitoring system on production servers.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. Testing Nodejs Applications Book Cover You can grab a copy of this book on this link

There is a plethora of monitoring and logging solutions around the internet. This article will not focus on any of those, but rather provide alternatives using tools already available in Linux/UNIX environments that may achieve nearly the same capabilities as any of those solutions.

In this article you will learn about:

  • Difference between logging and monitoring
  • Tools available for logging
  • Tools available for monitoring
  • How to install monitoring and logging tools
  • How to connect end-to-end reporting for faster response times.

Installing monit on Linux

It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and aptitude-enabled systems as follows:

$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
$ apt-get dist-upgrade # Installs only new updates

Example: updating aptitude binaries

At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.

Installing monit on Linux using apt
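
The actual installation boils down to one apt command; as a sketch, the service restart applies after editing the default configuration file /etc/monit/monitrc:

$ sudo apt-get update
$ sudo apt-get install monit

# restart the service after changing /etc/monit/monitrc
$ sudo service monit restart

Example: installing monit binaries using apt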

Installing monit on macOS

In case homebrew is not already available on your Mac, this is how to get it up and running. On its own, homebrew depends on the ruby runtime being available.

homebrew is a package manager and software installation tool that makes most developer tools installation a breeze.

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Example: installation instruction as provided by brew.sh

Generally speaking, this is how to install/uninstall things with brew

$ brew install wget 
$ brew uninstall wget 

Example: installing/uninstalling wget binaries using homebrew

We have to stress the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.

It is always a good idea to update the system before starting work, even when we have a daily task that automatically updates the system for us. macOS can use the homebrew package manager for maintenance matters. To update/upgrade or check outdated packages, the following commands will help.

$ brew outdated                   # lists all outdated packages
$ brew cleanup -n                 # visualize the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the formulae definitions
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade <formula>          # Upgrades one formula

$ brew install <formula@version>    # Installs <formula> at a particular version.
$ brew tap <formula@version>/brew   # Installs <formula> from a third party repository

# untap/re-tap a repo when a previous installation failed
$ brew untap <formula> && brew tap <formula>
$ brew services start <formula>@<version>

Example: key commands to work with homebrew cli

For more information, visit: Homebrew ~ FAQ.

Installing a monit on a macOS using homebrew

It is hard to deny the supremacy of monit on *NIX systems, and that doesn't exclude macOS systems. Installation of monit on macOS using homebrew aligns with homebrew installation guidelines. From above templates, the next example displays how easy it is to have monit up and running.

$ brew install monit        # Installation of latest monit
$ brew services start monit # Starting latest monit as a service 

Example: installing monit using homebrew

Installing monit on a Windows machine

Whereas macOS and Linux systems are quite relaxed when it comes to interacting with processes, Windows is a beast in its own way. monit was built for *nix systems, but there is an equivalent on Windows systems: the Service Control Manager. It basically has the same ability to check and restart processes that are failing.

Automated upgrades

Following the SemVer ~ aka Semantic Versioning ~ standard, it is not recommended to consider minor/major versions for automated upgrades. One of the reasons is that these versions are subject to introducing breaking changes or incompatibility between two versions. On the other hand, patches are less susceptible to introducing breaking changes, hence ideal candidates for automated upgrades. Another reason, among others, is that security fixes are released as patches to a minor version.

In the case of a critical infrastructure piece such as monitoring, we expect breaking changes when a configuration setting is added or dropped between two successive versions. Monit is a well-thought-out piece of software that provides backward compatibility, so the chance of breaking changes between two minor versions is really minimal.

We should highlight that it is always better to upgrade at deployment time. The process is even easier in a containerized context. We should also automate only patches, so as not to miss security patches.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work are: one, to maintain a blacklist of packages we do not want to automatically update, and two, to enable particular packages we would love to update on a periodic basis. That is compiled in the following configuration files.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step is necessary to make sure the unattended-upgrades download, install, and cleanup tasks have a defined period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is being able to report problems when an update fails, so that a human can intervene whenever possible. That is where apticron, the second tool installed earlier, comes in. To make it work, we will specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf

Conclusion

In this article we revisited ways to install monit on various platforms. Even though configuration was beyond the scope of this article, we managed to get everyday quick refreshers out.

Reading list and References

#nodejs #homebrew #UnattendedUpgrades #monit #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

This article revisits essentials on how to install nginx, a non-blocking, single-threaded, multipurpose web server, on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. Testing Nodejs Applications Book Cover You can grab a copy of this book on this link

Installing nginx on Linux

It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and aptitude-enabled systems as follows:

$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
$ apt-get dist-upgrade # Installs only new updates

Example: updating aptitude binaries

At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.

Installing nginx on Linux using apt

Updating/upgrading or a first install of the nginx server can be achieved with the following commands.

$ sudo add-apt-repository ppa:nginx/stable
$ sudo apt-get update 
$ sudo apt-get install nginx 

# To restart the service:
$ sudo service nginx restart 

Example: updating PPA and installing nginx binaries

Adding nginx PPA in first step is only required for first installs, on a system that does not have the PPA available in the system database.

Installing nginx on macOS

In case homebrew is not already available on your Mac, this is how to get it up and running. On its own, homebrew depends on the ruby runtime being available.

homebrew is a package manager and software installation tool that makes most developer tools installation a breeze.

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Example: installation instruction as provided by brew.sh

Generally speaking, this is how to install/uninstall things with brew

$ brew install wget 
$ brew uninstall wget 

Example: installing/uninstalling wget binaries using homebrew

We have to stress the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.

It is always a good idea to update the system before starting work, even when we have a daily task that automatically updates the system for us. macOS can use the homebrew package manager for maintenance matters. To update/upgrade or check outdated packages, the following commands will help.

$ brew outdated                   # lists all outdated packages
$ brew cleanup -n                 # visualize the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the formulae definitions
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade <formula>          # Upgrades one formula

$ brew install <formula@version>    # Installs <formula> at a particular version.
$ brew tap <formula@version>/brew   # Installs <formula> from a third party repository

# untap/re-tap a repo when a previous installation failed
$ brew untap <formula> && brew tap <formula>
$ brew services start <formula>@<version>

Example: key commands to work with homebrew cli

For more information, visit: Homebrew ~ FAQ.

Installing a nginx on a Mac using homebrew

$ brew install nginx@1.17.8  # as in <formula>@<version>

Example: installing nginx using homebrew

Installing nginx on a Windows machine

Whereas macOS comes with Python and Ruby already enabled, Windows takes an extra mile to make those two languages, somewhat required to run a nodejs environment successfully, available. nginx itself is an easy target, as it provides Windows binaries that we can download and install in a couple of clicks.

Automated upgrades

Before we dive into automatic upgrades, we should consider the nuances associated with managing an nginx deployment. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following SemVer ~ aka the Semantic Versioning standard, it is not recommended to include minor/major versions in automated upgrades. One reason is that these versions may introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, are less likely to introduce breaking changes, hence ideal candidates for automated upgrades. Another reason is that security fixes are released as patches to a minor version.

In the case of a web server, breaking changes may be introduced when a critical configuration setting is added or dropped between two successive versions.
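
A cheap way to catch that kind of breakage before it reaches users is to validate the configuration right after an upgrade and only then reload; a minimal sketch, assuming the nginx binary is on the PATH:

$ nginx -t           # parses the configuration and reports errors without serving traffic
$ nginx -s reload    # reloads workers only once the configuration validated

Example: validating the configuration after an upgrade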

We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so that no security patch is missed.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work: one, a blacklist of packages we do not want to update automatically; and two, the origins of packages we do want to update on a periodical basis. That is compiled in the following configuration.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step makes sure the unattended-upgrades download, install and cleanup tasks run on a default period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades
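
The same periodic settings can also be generated interactively instead of being edited by hand; this sketch assumes a Debian/Ubuntu system with the unattended-upgrades package already installed.

$ sudo dpkg-reconfigure -plow unattended-upgrades   # writes /etc/apt/apt.conf.d/20auto-upgrades

Example: generating 20auto-upgrades interactively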

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where apticron, the second tool installed in the first step, comes in. To make it work, we specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf
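
Before trusting the setup, a dry run helps verify that the configuration above is picked up; a minimal sketch, run as root or via sudo:

$ sudo unattended-upgrade --dry-run --debug   # simulates an upgrade run and logs the decisions

Example: dry run of unattended-upgrades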

Conclusion

In this article we revisited ways to install nginx on various platforms. Even though configuration was beyond the scope of this article, we managed to get some everyday quick refreshers out.

Reading list

#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

This article revisits the essentials of installing mongodb, one of the leading NoSQL databases, on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, it is designed to help those who bought the book to set up their working environment, as well as the wider audience of software developers who need the same information.

In this article we will talk about:


  • How to install mongodb on Linux, macOS and Windows.
  • How to start/stop mongodb automatically prior to/after system restarts

We will not talk about:


  • How to configure mongodb for development and production, as that is the subject of another article worth visiting.
  • How to manage mongodb in a production environment, in either containerized or standalone contexts.
  • How to load mongodb on Docker and Kubernetes

Installing mongodb on Linux

It is always a good idea to update the system before starting work, even when a daily task already updates binaries automatically. On Ubuntu and other aptitude-enabled systems, that can be achieved as follows:

$ apt-get update        # Fetches the list of available updates
$ apt-get upgrade       # Upgrades currently installed packages
$ apt-get dist-upgrade  # Upgrades packages, adding or removing dependencies as needed

Example: updating aptitude binaries

At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is no longer available in the registry. Installing software can be done from binaries, or using the Ubuntu package manager.

Installing mongodb on Linux using apt

Updating/upgrading or a first-time fresh install of mongodb can follow the next scripts.

sudo may be skipped if the current user has permission to write and execute programs

# Add public key used by the aptitude for further updates
# gnupg should be available in the system
$ apt-get install gnupg 
$ wget -qO - https://www.mongodb.org/static/pgp/server-3.6.asc | sudo apt-key add - 

# Create and add the apt sources list for mongodb (version 3.6 here; the entry differs per version and architecture)
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list

# Updating libraries and make the actual install 
$ sudo apt-get update
$ sudo apt-get install -y mongodb-org

# To install specific version(3.6.17 in our example) of mongodb, the following command helps
$ sudo apt-get install -y mongodb-org=3.6.17 mongodb-org-server=3.6.17 mongodb-org-shell=3.6.17 mongodb-org-mongos=3.6.17 mongodb-org-tools=3.6.17

Example: adding mongodb PPA binaries and installing a particular binary version
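
After the install, a few commands can confirm the server is in place and reachable; this sketch assumes a systemd-based Ubuntu (on older upstart systems, sudo service mongod start applies) and the default port 27017.

$ mongod --version                                        # confirms the installed server version
$ sudo systemctl enable mongod && sudo systemctl start mongod
$ sudo systemctl status mongod                            # checks that the daemon is running
$ mongo --eval 'db.runCommand({ connectionStatus: 1 })'   # connects on the default port 27017

Example: verifying the mongodb installation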

It is always a good idea to upgrade often. Breaking changes happen on major/minor binary updates, but are less likely on patch upgrades. mongodb versions go by even numbers: 3.2, 3.4, 3.6, and so on. A transition that skips a version may be catastrophic. For example, to upgrade from an older 3.x to 3.6, there should first be an intermediate upgrade from 3.x to 3.4, after which the upgrade from 3.4 to 3.6 becomes possible.

# Part 1
$ apt-cache policy mongodb-org          # Checking installed MongoDB version 
$ apt-get install -y mongodb-org=3.4    # Installing 3.4 MongoDB version 

# Part 2   
# Running mongodb
$ sudo killall mongod && sleep 3 && sudo service mongod start
$ sudo service mongodb start           

# Part 3 
$ mongo                                 # Accessing the mongo CLI

# Compatibility mode
> db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )  
> exit

# Part 4 
$ sudo apt-get install -y mongodb-org=3.6   # Upgrading to latest 3.6 version 
# Restart Server + As in Part 2.

Example: updating mongodb binaries and upgrading to a version
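
Before and after each intermediate hop, it helps to confirm which compatibility level is active; a minimal sketch from the mongo shell, assuming a local, unauthenticated instance:

$ mongo
> db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
> exit

Example: checking the active feature compatibility version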

Installing mongodb on macOS

In case homebrew is not already available on your Mac, this is how to get it up and running. homebrew itself depends on the ruby runtime being available.

homebrew is a package manager and software installation tool that makes installing most developer tools a breeze. We should also highlight that homebrew requires xcode to be available on the system.

$ /usr/bin/ruby -e \
    "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Example: installation instruction as provided by brew.sh

Generally speaking, this is how to install and uninstall things with brew

$ brew install wget 
$ brew uninstall wget 

Example: installing/uninstalling wget binaries using homebrew

We have to stress that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.

It is always a good idea to update the system before starting work, even when a daily task automatically updates the system for us. On macOS, the homebrew package manager handles these maintenance matters. To update/upgrade or check outdated packages, the following commands help.

$ brew outdated                   # Lists all outdated packages
$ brew cleanup -n                 # Shows the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the formula definitions
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade <formula>          # Upgrades one formula

$ brew install <formula>@<version>  # Installs <formula> at a particular version
$ brew tap <user>/<repo>            # Adds a third party repository (tap) of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap <user>/<repo> && brew tap <user>/<repo>
$ brew services start <formula>@<version>

Example: key commands to work with homebrew cli

For more information, visit: Homebrew ~ FAQ.

Installing mongodb on a Mac using homebrew

$ brew tap mongodb/brew 
$ brew install mongodb-community@3.6
$ brew services start mongodb-community@3.6 # start mongodb as a mac service 

Example: Install and running mongodb as a macOS service
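
To confirm the service actually came up, a short sketch, assuming the default port 27017:

$ brew services list            # mongodb-community should be marked as started
$ mongo --eval 'db.version()'   # the shell should reach the local server and print its version

Example: verifying the mongodb service started by homebrew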

Caveats ~ There are extra steps to make mongodb start/stop automatically when the system goes up/down. This step is vital when doing development on macOS, which does not necessarily need Linux production-bound task runners.

# To have launchd start mongodb at login:
$ ln -s /usr/local/opt/mongodb/*.plist ~/Library/LaunchAgents/

# Then to load mongodb now:
$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist

# To unregister and stop the service, use the following command
$ launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist

# When launchctl is not wanted/needed, this command works fine
$ mongod 

Example: Stop/Start when the system stops/starts
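
To double-check that launchd picked the job up, a minimal sketch:

$ launchctl list | grep mongodb   # a loaded job prints its PID and label

Example: verifying the launchd job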

Installing mongodb on a Windows machine

Whereas macOS and most Linux distributions come with Python and Ruby already enabled, it takes an extra mile for Windows to make those two languages available. We have to stress that those two languages are somewhat required to deploy a mongodb environment on most platforms, especially when working with nodejs.

The bright side of this story is that mongodb provides Windows binaries that we can download and install in a couple of clicks.

Automated upgrades

Before we dive into automatic upgrades, we should consider the nuances associated with managing a mongodb instance. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following SemVer ~ aka the Semantic Versioning standard, it is not recommended to include minor/major versions in automated upgrades. One reason is that these versions may introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, are less likely to introduce breaking changes, hence ideal candidates for automated upgrades. Another reason is that security fixes are released as patches to a minor version.

We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so that no security patch is missed.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work: one, a blacklist of packages we do not want to update automatically; and two, the origins of packages we do want to update on a periodical basis. That is compiled in the following configuration.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step makes sure the unattended-upgrades download, install and cleanup tasks run on a default period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where apticron, the second tool installed in the first step, comes in. To make it work, we specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf

Conclusion

In this article we revisited ways to install mongodb on various platforms. Even though configuration was beyond the scope of this article, we managed to get some everyday quick refreshers out. There are areas we wish we had covered in more depth, probably not in this first version of the article.

Some of the other places where people ask how to install mongodb on various platforms include, but are not limited to, Quora, StackOverflow and Reddit.

Reading list

#nodejs #homebrew #UnattendedUpgrades #mongodb #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

This article revisits the essentials of installing redis, a key/value data store, on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, it is designed to help both those who already bought the book and the wider audience of software developers set up a working environment. You can grab a copy of the book on this link.

Installing redis on Linux

It is always a good idea to update the system before starting work, even when a daily task already updates binaries automatically. On Ubuntu and other aptitude-enabled systems, that can be achieved as follows:

$ apt-get update        # Fetches the list of available updates
$ apt-get upgrade       # Upgrades currently installed packages
$ apt-get dist-upgrade  # Upgrades packages, adding or removing dependencies as needed

Example: updating aptitude binaries

At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is no longer available in the registry. Installing software can be done from binaries, or using the Ubuntu package manager.

Installing redis on Linux using apt

Updating redis applies in case the above global update didn't take effect. The -y option passes YES, so there is no Y/N prompt while installing libraries.

# Installing binaries 
$ curl -O http://download.redis.io/redis-stable.tar.gz
$ tar xzvf redis-stable.tar.gz && cd redis-stable 
$ make && make install
# Configure redis - for first time installs 

# Install via PPA 
$ apt-get install -y python-software-properties #provides access to add-apt-repository 
$ apt-get install -y software-properties-common python-software-properties

# @link https://packages.ubuntu.com/bionic/redis 
$ add-apt-repository -y ppa:bionic/redis # Alternatively rwky/redis
$ apt-get update
$ apt-get install -y redis-server

# Starting redis for development (this config path is the homebrew default on a Mac; adjust on Linux)
$ redis-server /usr/local/etc/redis.conf 

Example: installing redis binaries with aptitude
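
Whichever install path was taken, the following sketch confirms the binary and a running server, assuming the default port 6379:

$ redis-server --version   # confirms the installed binary
$ redis-cli ping           # a running server answers PONG

Example: verifying the redis installation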

Installing redis on a Mac system

In case homebrew is not already available on your Mac, this is how to get it up and running. homebrew itself depends on the ruby runtime being available.

homebrew is a package manager and software installation tool that makes installing most developer tools a breeze.

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Example: installation instruction as provided by brew.sh

Generally speaking, this is how to install/uninstall things with brew

$ brew install wget 
$ brew uninstall wget 

Example: installing/uninstalling wget binaries using homebrew

We have to stress that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.

It is always a good idea to update the system before starting work, even when a daily task automatically updates the system for us. On macOS, the homebrew package manager handles these maintenance matters. To update/upgrade or check outdated packages, the following commands help.

$ brew outdated                   # Lists all outdated packages
$ brew cleanup -n                 # Shows the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the formula definitions
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade <formula>          # Upgrades one formula

$ brew install <formula>@<version>  # Installs <formula> at a particular version
$ brew tap <user>/<repo>            # Adds a third party repository (tap) of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap <user>/<repo> && brew tap <user>/<repo>
$ brew services start <formula>@<version>

Example: key commands to work with homebrew cli

For more information, visit: Homebrew ~ FAQ.

Installing redis on macOS using curl

Installing redis via a curl command is not that different from the Linux approach. The following instructions accomplish the installation.

# Installing binaries 
$ curl -O http://download.redis.io/redis-stable.tar.gz
$ tar xzvf redis-stable.tar.gz && cd redis-stable 
$ make && make install
# Configure redis - for first time installs 

Example: install redis binaries using curl and make

Installing redis on a Mac using homebrew

$ brew install redis        # Installation following <formula>@<version> template 
$ brew services start redis # Starting redis as a service  

# Alternatively start as usual 
$ redis-server /usr/local/etc/redis.conf
# Running on port: 6379

Example: install redis binaries using homebrew
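
To verify the service and stop it when done, a short sketch, assuming the default port 6379:

$ brew services list         # redis should be marked as started
$ redis-cli ping             # expects PONG on the default port 6379
$ brew services stop redis   # stops the background service when no longer needed

Example: verifying and stopping the redis service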

Installing redis on a Windows machine

Whereas macOS comes with Python and Ruby already enabled, Windows takes an extra mile to make those two languages, somewhat required to run a nodejs environment successfully, available. redis itself is an easy target, as it provides Windows binaries that we can download and install in a couple of clicks.

Automated upgrades

Before we dive into automatic upgrades, we should consider the nuances associated with managing a redis instance. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following SemVer ~ aka the Semantic Versioning standard, it is recommended that only even-numbered ("pair") minor versions be considered for version upgrades. This is because minor versions, as well as major versions, may introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, rarely introduce breaking changes; those can therefore be automated.

We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so that no security patch is missed.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work: one, a blacklist of packages we do not want to update automatically; and two, the origins of packages we do want to update on a periodical basis. That is compiled in the following configuration.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step makes sure the unattended-upgrades download, install and cleanup tasks run on a default period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where apticron, the second tool installed in the first step, comes in. To make it work, we specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf

Conclusion

In this article we revisited ways to install redis on various platforms. Even though configuration was beyond the scope of this article, we managed to get some everyday quick refreshers out.

Reading list

#nodejs #homebrew #UnattendedUpgrades #redis #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications