The configuration is one of the software component layers and, as such, should be testable in isolation like any other component. Modularization of the configuration layer improves its reusability and testability. The question we should be asking is: how do we get there? Answering that question is the objective of this article.

The Twelve-Factor App, a collection of good practices, advocates for “strict separation of configuration from code” and “storing configuration in environment variables”, among other things.

The Twelve-Factor App challenges the status quo when it comes to configuration management. The following paragraph, taken verbatim from its documentation, is a clear illustration of that fact.

“A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.” ~ verbatim text from The Twelve-Factor App ~ Config section

In this article we will talk about:

  • Differentiation of configuration layers
  • How to decouple code from configuration
  • How to modularize configuration for testability
  • How to prevent configuration key leaks into the public space

Techniques and ideas discussed in this blog are available in more detail in the “Configurations” chapter of the “Testing nodejs Applications” book. You can grab a copy via this link.

Show me the code

const Twitter = require('twitter');

function TwitterClient(accounts, ids) {

    this.client = new Twitter({
        consumer_key: `Plain Text Twitter Consumer Key`,
        consumer_secret: `Plain Text Twitter Consumer Secret`,
        access_token_key: `Plain Text Twitter Access Token Key`,
        access_token_secret: `Plain Text Twitter Access Token Secret`
    });

    //accounts such as: @TechCrunch, @Twitter, etc
    this.track = Array.isArray(accounts) ? accounts.join(',') : accounts;
    //ids: corresponding Twitter account IDs 816653, 783214, etc
    this.follow = Array.isArray(ids) ? ids.join(',') : ids;
}

/**
 * <code>
 * let stream = new TwitterClient('@twitter', '783214').getStream();
 * stream.on('error', error => handleError(error));
 * stream.on('data', tweet => logTweet(tweet));
 * </code>
 * @name getStream - Returns Usable Stream
 * @returns {Object<TwitterStream>}
 */
TwitterClient.prototype.getStream = function(){
    return this.client.stream('statuses/filter', {track: this.track, follow: this.follow});
};

Example: a Twitter client with hard-coded (plain text) credentials

What can possibly go wrong?

When trying to figure out how to approach modularizing configurations, the following points may be a challenge:

  • Being able to share the source code without leaking secret keys to the world
  • Laying down a strategy to move configurations into configuration files
  • Making configuration settings as testable as any module.

The following sections will explore more on making points stated above work.

Layers of configuration of nodejs applications

Although this blog article provides a basic understanding of configuration modularization, it defers configuration management to another blog post: “Configuring nodejs applications”.

From a production readiness perspective, at least in the context of this blog post, there are two distinct layers of application configurations.

The first layer consists of configurations that the nodejs application needs to execute intrinsic business logic. They will be referred to as environment variables/settings. Third-party issued secret keys or the server port number fall under this category. In most cases, you will find such configurations in static variables scattered across the application.

The second layer consists of configurations required by the system that is going to host the nodejs application. Database server settings, monitoring tools, SSH keys, and other third-party programs running on the hosting entity are a few examples that fall under this category. We will refer to these as system variables/settings.

This blog will be about working with the first layer: environment settings.

Decoupling code from configuration

The first step in decoupling configuration from code is to identify and normalize the way we store our environment variables.

module.exports = function hasSecrets(){
    const SOME_SECRET_KEY = 'xyz=';
    //... the rest of the implementation uses SOME_SECRET_KEY
};

Example: function with an encapsulated secret

The previous function encapsulates secret values that can be moved outside the application. If we apply this technique, SOME_SECRET_KEY will be moved outside the function, and imported whenever needed instead.

const SOME_SECRET_KEY = require("./config").SOME_SECRET_KEY;

module.exports = function hasSecrets(){
    //... the rest of the implementation uses the imported SOME_SECRET_KEY
};

Example: function with a decoupled secret value

This process has to be repeated all over the application, until every single secret value is replaced with its constant equivalent. It doesn't have to be perfect on the first try; it simply has to work. We can make it better later on.

Configuration modularization

For curiosity's sake, what does config.js look like at the end of the “decoupling configuration from code” step?

module.exports.SOME_SECRET_KEY = 'xyz=';

Example: the first iteration of decoupling configuration from code

This step works but has essentially two key flaws:

  • In a team of multiple players, each having their own environment variables, config.js will become a liability. It doesn't scale well.
  • This strategy will not prevent the catastrophe of leaking secrets to the public in case the code becomes open source.

To mitigate this, after normalizing the way we store and retrieve environment variables, the next step is to organize the results in a module. Modules are portable and easy to test.

Modularization makes it possible to test configuration in isolation. Yes, we will have to prove to ourselves it works, before we convince others that it does!
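As a minimal sketch of where this is heading (file name and keys are illustrative), the configuration can live in its own module that reads from the environment, and a mocha test can then exercise that module like any other:

//config.js -- a hypothetical configuration module
module.exports = Object.freeze({
    SOME_SECRET_KEY: process.env.SOME_SECRET_KEY || '',
    PORT: process.env.PORT || 3000
});

//test/config.test.js -- the configuration layer tested in isolation
var expect = require('chai').expect;

describe('config', function(){
    it('reads SOME_SECRET_KEY from the environment', function(){
        process.env.SOME_SECRET_KEY = 'xyz=';
        delete require.cache[require.resolve('../config')];//force re-evaluation of the module
        var config = require('../config');
        expect(config.SOME_SECRET_KEY).to.equal('xyz=');
    });
});

Example: a hypothetical configuration module and a test exercising it in isolation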

Measures to prevent private key leakage

The first line of defense when it comes to preventing secret keys from leaking to the public is to make sure not a single private value is stored in the codebase itself. The following example illustrates this statement.

module.exports = function leakySecret(){
    const SOME_SECRET_KEY = 'xyz=';
    //... SOME_SECRET_KEY ships with the codebase, ready to leak
};

Example: function with a leak-able secret key

The second line of defense is to decouple secret values from the application itself, and use an external source to provision secret values at runtime. nodejs makes this possible by exposing the process environment via process.env.

A simple yet powerful tool for this is the dotenv library. This library can be swapped out, depending on taste or project requirements.

One of the alternatives to dotenv is convict.js.

Last but not least, since we are using git, adding .env to .gitignore prevents contributors from accidentally committing their .env files to the shared repository.

dotenv-extended builds on the same idea and makes it possible to combine *nix environment variables with values read from dotenv files.

require('dotenv').config();
const Twitter = require('twitter');

function TwitterClient(accounts, ids) {
    this.client = new Twitter({
        consumer_key: process.env.TWITTER_CONSUMER_KEY,
        consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
        access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
        access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET
    });
    //... track/follow initialization as before
}

Example: reading credentials from environment variables loaded via dotenv

Conclusion

Modularization is key to crafting re-usable, composable software components. The configuration layer is not an exception to this rule. Modularization of configurations brings elegance and eases the management of critical information such as security keys.

In this article, we re-asserted that with a little bit of discipline, and without breaking our piggy bank, it is still possible to better manage application configurations. Modularization of configuration makes it possible to reduce the risk of secret key leaks, as well as increasing testability. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #nodejs #configuration

The ever-growing number of files does not spare test files. The number of similar test double files can be used as an indication of a need to refactor or modularize test doubles. This blog applies the same techniques we used to modularize other layers of a nodejs application, but in an automated testing context.

In this article we will talk about:

  • The need to have test doubles
  • How a utilities library relates to a fixtures library
  • Reducing repetitive imports via a unified export library
  • How to modularize fixtures of spies
  • How to modularize fixtures of mock data
  • How to modularize fixtures of fakes
  • How to modularize fixtures of stubs
  • How to modularize test doubles for reusability
  • How to modularize test doubles for composability

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var should = require('should');
var expect = require('expect');
var chai = require('chai');

Example: the same assertion libraries imported at the top of every test file

What can possibly go wrong?

The following points may be a challenge when modularizing test doubles:

  • Some testing libraries share dependencies with the project they are supposed to test
  • Individual test doubles can be replicated in multiple places
  • With this knowledge, how can we reduce the waste and reuse most of the dependencies?

In the next sections, we make a case on modularization for reusability as a solution to reduce code duplication.

The Status Quo

Every test double library is in fact an independent library. That remains true even when some libraries are bundled and shipped together, as is the case for chai (which ships with should and expect). Every mock, every spy, and every stub we make in one place can potentially be replicated in multiple other places that test similar code blocks, or code blocks that share dependencies.

One of the solutions to share common test double configurations across multiple test cases is to organize test doubles in modules.

The need to have test doubles in tests

In this series, there is one blog post that discusses the difference between the various test doubles: spy/mock/stubs/fake and fixtures. For the sake of brevity, that will not be our concern for the moment. Our concern is to reflect on why we should have test doubles in the first place.

From a time and cost perspective, it takes time to load a single file, and even longer to load multiple files, be it in parallel or sequentially. The higher the number of test cases spanning multiple files, the longer the test runner takes to complete execution. This adds more execution time to an already slow process.

If there is one improvement, among others, that would save us time, it is reusing the same library as often as possible while mimicking the implementation of things we don't really need to load (mocking, etc.).

Testing code should behave like a state machine, or a pure function: every input results in the same output. Test doubles are essentially tools that help us save time and cost by acting as drop-in replacements for expected behaviors.

How utilities relate to fixtures

In this section, we pause a little bit to answer the question: “How does a utilities library relate to a fixtures library?”

Utility libraries (utilities) provide tools that are not necessarily related to the core business of the program, but are necessary to complete a set of tasks. The need to have utilities is not limited to business logic; it extends to testing code. In the context of tests, the utilities are going to be referred to as fixtures. Fixtures can hold computations or data that emulate a state under which the program has to be tested.

Grouping imports with unified export library

The module system provided by nodejs is a double-edged sword. It presents opportunities to create granular systems, but repetitive imports can weaken the performance of the application.

To reduce repetitive imports, we make good use of index files. This compensates for our refusal to attach modules to the global object. It also makes it possible to abstract away the file structure: one doesn't have to know the whole project's structure to import a single function.
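As a sketch, assuming a test/fixtures directory (the file names below are illustrative), an index file can re-export everything, so that a test file needs one single require:

//test/fixtures/index.js -- one entry point for all fixtures
module.exports = {
    spies: require('./spies'),
    mocks: require('./mocks'),
    fakes: require('./fakes'),
    stubs: require('./stubs')
};

//in a test file: one import instead of four
var fixtures = require('./fixtures');

Example: a unified export of fixtures via an index file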

How to modularize fixtures of spies

The modularization of spies generally takes one step. Since the spies already have a name, it makes sense to group them under the fixtures library, by category or feature, and export the resulting module. The use of an index file makes it possible to expose a complex file structure via one single import (or export, depending on perspective).
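A minimal sketch of such a grouping, assuming sinon is available and the handler names are illustrative:

//test/fixtures/spies.js -- named spies grouped by feature
var sinon = require('sinon');

module.exports = {
    next: sinon.spy(),
    logger: {info: sinon.spy(), error: sinon.spy()}
};
//remember to reset shared spies (e.g. spy.resetHistory()) between tests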

How to modularize fixtures of mock data

Mock data is the cornerstone of simulating a desired test state when a particular kind of data is injected into a function/system. Grouping related data under the same umbrella makes sense in most cases. After that, exposing the data via export constructs follows naturally.
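A sketch of such a grouping, with illustrative shapes:

//test/fixtures/mocks.js -- mock data grouped by feature
module.exports = {
    users: {
        valid: {name: 'Jane Doe', email: 'jane@example.com'},
        missingEmail: {name: 'Jane Doe'}
    },
    tweets: [{id: 816653}, {id: 783214}]
};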

How to modularize fixtures of fakes

Fakes are functions similar to the implementations they are designed to replace, most of the time third-party functionality, and can be used to simulate the original behavior. When two or more fakes share striking similarities, they become good candidates for merging, refactoring, and modularization.
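A sketch of one such fake, assuming an email-sending function with a (to, message, done) signature:

//test/fixtures/fakes.js -- fakes replacing third-party behavior
module.exports.sendEmail = function fakeSendEmail(to, message, done){
    //resolves immediately instead of hitting a real email provider
    return done(null, {accepted: [to], message: message});
};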

How to modularize fixtures of stubs

Stubs are most of the time mistaken for mocks. That is because they tend to operate in similar use cases. A stub is a fake that replaces a real implementation, and is capable of receiving and producing a pre-determined outcome using mock data. The modularization takes a single step in case the stub is already named. The last step is to actually export and expose the function as an independent, exportable function.
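A sketch of a stubs fixture, assuming sinon and the mock data module sketched above:

//test/fixtures/stubs.js -- named stubs producing pre-determined outcomes
var sinon = require('sinon');
var mocks = require('./mocks');

module.exports.findUserById = sinon.stub().resolves(mocks.users.valid);
module.exports.sendEmail = sinon.stub().yields(null, {delivered: true});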

How to modularize test doubles for reusability

Test doubles are reusable in nature. There is no difference between designing functions/classes and designing test doubles for reusability per se. To be able to reuse a class/function, that function has to be exposed to the external world. That is where the export construct comes into the picture.

How to modularize test doubles for composability

Composability, on the other hand, is the ability for one module to be combined with others. For that to happen, the main client that is going to use the library has to be injected into the library, either via a thunk or a similar strategy. The following example shows how two or more test doubles can be modularized for composability.
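A sketch of that idea: the fixture receives its client (here a chai instance) instead of requiring it itself, so callers can compose it with the flavor and plugins they care about (sinon-chai is an assumed plugin here):

//test/fixtures/assertions.js -- a thunk that receives the assertion library
module.exports = function(chai){
    chai.use(require('sinon-chai'));//compose with plugins the caller cares about
    return {expect: chai.expect, should: chai.should()};
};

//in a test file
var chai = require('chai');
var assertions = require('./fixtures/assertions')(chai);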

Some stubbing questions we have to keep in mind:

  • How does stubbing differ from mocking?
  • How does stubbing differ from spying? Spies/stubs are functions with pre-programmed behavior.
  • How do we know if a function has been called with a specific argument? For example, whether res.status(401).send() was called.

More has been discussed in this blog as well: spy/mock/stubs/fake and fixtures.

Making chai, should and expect accessible

The approach explained below makes it possible to make pre-configured chai available in a global context, without attaching chai explicitly to the global Object.

  • There are multiple ways to go about modularization, but the most basic is using exports.
  • This technique will not make any library available by default, but it is designed to reduce boilerplate when testing.

var chai = require('chai');
module.exports.chai = chai;
module.exports.should = chai.should();//invoking should() registers the interface
module.exports.expect = chai.expect;

Example: a unified export of assertion libraries

Conclusion

Modularization is a key strategy in crafting re-usable, composable software. Modularization brings elegance, improves performance, and in this case, re-usability of test doubles across the board.

In this article, we revisited how test double modularization can be achieved by leveraging the power of module.exports (or export in ES2015+). The ever-increasing number of similar test double instances makes them good candidates for modularization; at the same time, it is imperative that the modularization be minimalistic. That is why we leveraged the index file, to make sure we do not overload already complex architectures. There are additional complementary materials in the “Testing nodejs applications” book on this very same subject.


tags: #snippets #nodejs #spy #fake #mock #stub #test-doubles #question #discuss

The stream API provides a powerful asynchronous computation model that keeps a small memory footprint. As exciting as it may sound, testing streams is somewhat intimidating. This blog lays out some key elements necessary to be successful when mocking the stream API.

We keep in mind that there is a clear difference between mocking and stubbing/spying/faking, even though we use the term mock interchangeably.

In this article we will talk about:

  • Understanding the difference between Readable and Writable streams
  • Stubbing Writable stream
  • Stubbing Readable stream
  • Stubbing Duplex or Transformer streams

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var fs = require('fs');
var gzip = require('zlib').createGzip();//quick example to show multiple pipings
var route = require('express').Router();
//getter() reads a large file of songs metadata, transforms it, and sends back scaled down metadata
route.get('/songs', function getter(req, res, next){
        let rstream = fs.createReadStream('./several-TB-of-songs.json');
        rstream.
            pipe(new MetadataStreamTransformer()).
            pipe(gzip).
            pipe(res);
        // forwarding the error to the next handler
        rstream.on('error', (error) => next(error, null));
});

At a glance, the code is supposed to read a very large JSON file (terabytes of songs metadata), apply some transformations, gzip the result, and send the response to the caller by piping the results into the response object.

The next example demonstrates what a typical transformer such as MetadataStreamTransformer looks like:

const inherits = require('util').inherits;
const Transform = require('stream').Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//<= re-enforces object mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation
    //@todo process chunk by adding/removing elements
    let data = JSON.parse(typeof chunk === 'string' ? chunk : chunk.toString('utf8'));
    this.push({id: (data || {}).id || Math.random()});
    if(typeof next === 'function') next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals that the operation is over
    if(typeof next === 'function') {next();}
};

The inheritance style used in this program might be dated, but it illustrates well enough, in a prototypal way, that our MetadataStreamTransformer inherits from Stream#Transform.

What can possibly go wrong?

Stubbing functions in a stream processing scenario may yield the following challenges:

  • How to deal with the asynchronous nature of streams
  • Identifying areas where it makes sense to use a stub, for instance: expensive operations
  • Identifying key areas needing drop-in replacements, for instance: reading from a third-party source over the network

Primer

The key when stubbing streams is:

  • To identify where the heavy lifting is happening. In pure stream terms, the functions that execute _read() and _write() are our main focus.
  • To isolate some entities, to be able to test small parts in isolation. For instance, make sure we test MetadataStreamTransformer in isolation, and mock any response fed into the .pipe() operator elsewhere.

What is the difference between readable vs writable vs duplex streams? The long answer is available in substack's Stream Handbook

Generally speaking, Readable streams produce data that can be fed into Writable streams. Readable streams can be .piped on, but not into. Readable streams have readable|data events and, implementation-wise, implement ._read() from the Stream#Readable interface.

Writable streams can be .piped into, but not on. For example, res in the examples above is piped into from an existing stream. The opposite is not always guaranteed. Writable streams have drain|finish events and, implementation-wise, implement ._write() from the Stream#Writable interface.

Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transform streams are duplex and implement ._transform() from the Stream#Transform interface.

Modus Operandi

How do we test the above code by taking it on in smaller pieces?

  • fs.createReadStream won't be tested, but stubbed so that it returns a mocked readable stream
  • .pipe() will be stubbed to return a chain of stream operators
  • gzip and res won't be tested, and are therefore stubbed to return writable+readable mocked stream objects
  • rstream.on('error', cb): stub the readable stream with a read error, spy on next(), and check that it has been called
  • MetadataStreamTransformer will be tested in isolation, and MetadataStreamTransformer._transform() will be treated as any other function, except that it accepts streams and emits events

How to stub stream functions

const sinon = require('sinon');
const fs = require('fs');
//responseMock is assumed to be a mocked writable stream defined elsewhere in the test setup
describe('/songs', () => {
    before(() => {
        sinon.stub(fs, 'createReadStream').returns({
            pipe: sinon.stub().returns({
                pipe: sinon.stub().returns({
                    pipe: sinon.stub().returns(responseMock)
                })
            }),
            on: sinon.spy(() => true)
        });
    });
});

This way of chained stubbing is available in our toolbox. Great power comes with great responsibilities, and wielding this sword may not always be a good idea.

There is an alternative at the very end of this discussion.

Testing the transformer stream class in isolation may be broken down into:

  • stubbing the whole Transform instance
  • or stubbing .push() and simulating a write by feeding in a mocked readable stream of data

The stubbed push() is a good place to add assertions:

it('_transform()', function(done){
    var sinon = require('sinon');
    var chai = require('chai'), assert = chai.assert, expect = chai.expect;
    var Readable = require('stream').Readable;
    //MetadataStreamTransformer is assumed to be required at the top of the test file
    var rstream = new Readable({read(){}});//a no-op _read() lets the test push data manually
    //stub push() on the prototype, so the instance created below picks up the stub
    var mockPush = sinon.stub(MetadataStreamTransformer.prototype, 'push').callsFake(function(data){
        if(data) assert.isNumber(data.id);//testing data sent to callers; null signals the end
        return true;
    });
    var tstream = new MetadataStreamTransformer();
    rstream.pipe(tstream);
    rstream.push(JSON.stringify({id: 1}));
    rstream.push(JSON.stringify({id: 2}));
    rstream.push(null);//end of data
    tstream.on('finish', function(){
        expect(mockPush.called, '#push() has been called').to.be.true;
        mockPush.restore();
        done();
    });
});

How to Mock Stream Response Objects

The classic example of a readable stream is reading from a file. This example shows how to mock fs.createReadStream and return a readable stream that can be asserted on.

var sinon = require('sinon');
var assert = require('chai').assert;
var fs = require('fs');
var Readable = require('stream').Readable;

//the stub can emit two or more chunks + close the stream
var rstream = new Readable({read(){}});
sinon.stub(fs, 'createReadStream').callsFake(function(file){
    //trick from @link https://stackoverflow.com/a/33154121/132610
    assert(file, '#createReadStream received a file');
    rstream.push('{"id":1}');
    rstream.push('{"id":2}');
    rstream.push(null);//end of the stream
    return rstream;
});

var pipeStub = sinon.spy(rstream, 'pipe');
//Once called, the structure above will stream two elements: good enough to simulate reading a file.
//gzip (another transformer stream) and res can be stubbed in a similar fashion.
var next = sinon.stub();
//req and res are assumed to be mocked request/response objects
//use the getter() handler directly | or call the whole route
getter(req, res, next);
//expectations follow:
expect(rstream.pipe.called, '#pipe() has been called');

Conclusion

In this article, we established the difference between Readable and Writable streams, and how to stub each one of them when unit testing.

Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #TDD #streams #nodejs #mocking

Sometimes, changes in code involve changes in models. Fields can be added or removed depending on the requirements at hand. This blog post explores some techniques to make versioning work with mongodb models.

There is a more generalist Database Maintenance, Data Migrations, and Model Versioning article that goes beyond mongodb models.

In this article we will talk about:

  • Model versioning strategies
  • Avoiding model versioning colliding with database engine upgrades
  • Migration strategy for model upgrades with schema change
  • Migration strategy for models with hydration
  • Tools that make model migrations easier

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

These snippets illustrate the evolution of one fictitious UserSchema. A schema describes what a model will look like once compiled and ready to be used with the mongodb database engine.

//Data Model Version 1.0.0
var UserSchema = new mongoose.Schema({name: String});

//Data Model Version 1.0.1
var UserSchema = new mongoose.Schema({name: String, email: String});

//Data Model Version 1.1.0
var UserSchema = new mongoose.Schema({
    name: String, 
    email: {type: String, required: true}
});

//Data Model Version 2.0.0
var UserSchema = new mongoose.Schema({ 
    name: {first: String, last: String},
    addresses: [Address],//Address and Order are assumed to be sub-document schemas defined elsewhere
    orders: [Order]
});

module.exports = mongoose.model('User', UserSchema);

Example: Evolution of a mongoose data model

What can possibly go wrong?

It is common to execute software updates in bulk, especially when the application is a monolith. The term bulk is used for lack of a better word, but the idea behind it can be summed up as the need to update data models, coupled with data hydration into the new data models, with the potential of updating the database engine, all at the same time.

It becomes clear that, when we have to update more than two things at the same time, complex operations will get involved, and the more complex the update gets, the nastier the problem will become.

When trying to figure out how to approach a migration, be it from one model version to the next, from one low/high level ORM/ODM (mongoose, knex, sequelize) version to the next, from one database engine version to the next, or from one database driver version to the next, we should always keep in mind some of these challenges (questions):

  • When is the right time to do a migration?
  • How do we automate data transformations from one model version to the next?
  • What is the difference between an update and an upgrade in our particular context?
  • What are the bottlenecks (moving parts) of the current database update/upgrade?
  • How can we align model versioning and data migrations alongside database updates/upgrades/patches?

The key strategy for tackling difficult situations, at least in the context of this blog post series, has been to split big problems into sub-problems, then resolve one sub-problem at a time.

Update vs Upgrade

Database updates and patches are released on a regular basis; they are safe and do not cause major problems when the time comes to apply them. From a system maintenance perspective, it makes sense to apply patches as soon as they come out, and on a regular, repeatable basis. For example, every Friday at midnight, a task can apply a patch to the database engine. At this point, there is one issue off our plate. How about database upgrades?

Upgrades

Avoiding model versioning colliding with other database-related upgrades ~ Any upgrade has breaking changes in it; some are minor, others are really serious, such as data format incompatibility and whatnot. Since upgrades can cause harm, it makes sense to NOT do upgrades at the same time as model versioning or, worse, data model versioning with hydration. Among such upgrades we can list ORM/ODM, database driver, and database engine upgrades. Since they are not frequent, they can be planned once every quarter, depending on the schedule of the software we are talking about. It makes sense to have a window to execute, test, and adapt if necessary. Once a quarter, as a part of sprint cleaning, makes sense. As a precaution, it also makes sense to NOT plan upgrades at the same time as model version changes.

Model versioning strategies

As expressed in the sample code, the evolution of data-driven applications goes hand in hand with schema evolution. As the application grows, some decisions turn out to be detrimental and may need corrective measures in further iterations. We keep in mind that some new features require revisiting the schema. In all cases, the model schema will have to change to adapt to new realities. The complexity of a schema change depends on how complex the addition or removal turns out to be. To reduce complexity and technical debt, every deployment should include steps to apply schema changes and re-hydrate data into new models to reflect those changes. When possible, features that require a schema change can be shipped in a minor (Major.Minor.Patch) release, whereas everyday releases (in continuous delivery mode) can be just patches. Similarly, major version releases can include ORM/ODM upgrades, database driver upgrades, database engine upgrades, and data migration from an old system to a new system. It is NOT good to include model changes in a major release; we keep those in minor releases.

Migration strategy for model upgrades with schema change

From the previous sections, it makes sense to keep model upgrades with schema changes as a minor release task, whether they imply data hydration or not.

Migration strategy for model upgrades with data hydration

Data hydration is necessary when the data structure has changed to remove fields, split fields, or add embedded documents. Data hydration may not be necessary when the schema change relaxes validity or availability constraints. However, if a field becomes required, then it makes sense to add a rehydration strategy. It is better to execute hydration on every minor release, even when not strictly necessary.

Tools that make model upgrade easy

There are libraries that can be used to execute data migration/hydration as a part of a model upgrade operation; node-migrate is one of them. Advanced tools from the relational database world, such as flywaydb, can also be used. When it comes to model upgrades, a consistent, repeatable strategy gives more bang for your buck than a full-fledged solution from the wild. A sketch of what such a migration step could look like follows.
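To make this concrete, a minimal node-migrate style step is sketched below. It assumes a users collection whose name field evolved from a plain string to {first, last}; collection and field names are hypothetical, and large collections would call for a streaming cursor rather than toArray():

//migrations/002-split-user-name.js -- a hypothetical hydration step
var mongoose = require('mongoose');

module.exports.up = function(next){
    //work on the raw collection, since old documents no longer match the compiled schema
    var users = mongoose.connection.collection('users');
    users.find({name: {$type: 'string'}}).toArray(function(err, docs){
        if(err) return next(err);
        var updates = docs.map(function(doc){
            var parts = doc.name.split(' ');
            return users.updateOne(
                {_id: doc._id},
                {$set: {name: {first: parts[0], last: parts.slice(1).join(' ')}}}
            );
        });
        Promise.all(updates).then(function(){ next(); }, next);
    });
};

Example: a hydration step splitting a legacy name string into first/last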

Conclusion

In this article, we revisited how to align schema versioning with mongodb releases, taking into consideration data migration and hydration, as well as tools that make data handling easier. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #mongodb #mongoose #migration #data-migration #model-migration #nodejs

Scheduled tasks are hard to debug. Due to their asynchronous nature, bugs in scheduled tasks strike later; anything that can help prevent that behavior and curb failures ahead of time is always good to have.

Unit testing is one of the effective tools to meet this challenge. The question we answer here is: how do we test scheduled tasks in isolation? This article introduces some techniques to do that. Using modularization techniques on scheduled background tasks, we will shift the focus to making chunks of code blocks accessible to testing tools.

In this article we will talk about:

  • How to define a job(task)
  • How to trigger a job(task)
  • How to modularize tasks for testability
  • How to modularize tasks for reusability
  • How to modularize tasks for composability
  • How to expose task scheduling via a RESTful API
  • Alternatives to the agenda scheduling model

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

The following example shows how a job trigger can be used inside an expressjs route:


//jobs/email.js
var email = require('some-lib-to-send-emails'); 
var User = require('./models/user.js');

module.exports = function(agenda) {
  agenda.define('registration email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
      if(err) return done(err);
      var message = ['Thanks for registering ', user.name, 'more content'].join('');
      return email(user.email, message, done);
    });
  });
  agenda.define('reset password', function(job, done) {/* ... more code*/});
  // More email related jobs
};

//route.js
//lib/controllers/user-controller.js
var express = require('express'),
    app = express(),
    User = require('../models/user-model'),
    agenda = require('../worker.js');

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if(err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent 24 hours
    agenda.now('registration email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

Example: defining email jobs and triggering one from an expressjs route

What can possibly go wrong?

When trying to figure out how to approach modularization of nodejs background jobs, the following points may be quite a challenge on their own:

  • abstracting, and/or injecting, the background job library into an existing application
  • abstracting, or defining, scheduled jobs outside the application

The following sections will explore more on making points stated above work.

How to define a job

The agenda library comes with an expressive API. The interface provides two sets of utilities, one of which is .define(), which does the task definition chore. The following example illustrates this idea.

agenda.define('registration email', 
  function(metadata, done) {

});

How to trigger a job

As stated earlier, the agenda library also comes with an interface to trigger a job, or schedule an already defined job. The following example illustrates this idea.

agenda.now('registration email', {userId: userId});
agenda.every('3 minutes', 'delete old users');
agenda.every('1 hour', 'print analytics report');

How to modularize tasks for reusability

There is a striking similarity between event handling and task definition.

That similarity raises a whole new set of challenges, one of which turns out to be a tight coupling between task definition and the library that is expected to execute those jobs.

The refactoring technique we have been using all along is handy in the current context as well. We have to eject the job definition from the agenda library constructs. The next step in the refactoring iteration is to inject the agenda object as a dependency whenever it is needed.

The modularization cannot end at this point; we also need to export individual jobs (task handlers) and expose those exported modules via an index file, as sketched below.
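A sketch of what that ejection could look like, assuming a jobs/ directory with one handler per file (the paths and the reset-password handler are illustrative):

//jobs/registration-email.js -- the handler no longer knows about agenda
var email = require('some-lib-to-send-emails');
var User = require('../models/user');

module.exports = function registrationEmail(job, done){
    User.findById(job.attrs.data.userId, function(err, user){
        if(err) return done(err);
        return email(user.email, 'Thanks for registering ' + user.name, done);
    });
};

//jobs/index.js -- injects agenda and wires every handler to its definition
module.exports = function(agenda){
    agenda.define('registration email', require('./registration-email'));
    agenda.define('reset password', require('./reset-password'));
};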

How to modularize tasks for testability

The challenges that come with mocking any object apply to the agenda instance as well.

The implementation of jobs (or task handlers) is lost as soon as a stub/fake is provided. The argument that stubs will play well is valid, as long as independent jobs (task handlers) are tested in isolation.

To avoid the need to mock the agenda object in multiple places, loading agenda from a dedicated module provides quite a good solution to this issue.
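A minimal sketch of that dedicated module, and of a test stubbing it (the file layout is assumed):

//worker.js -- the single place where the agenda instance is created
var Agenda = require('agenda');
module.exports = new Agenda({db: {address: process.env.MONGO_URL}});

//in a test: stub the shared instance instead of re-mocking agenda everywhere
var sinon = require('sinon');
var agenda = require('../worker');

var nowStub = sinon.stub(agenda, 'now');
//... exercise the code under test that schedules 'registration email' ...
sinon.assert.calledWith(nowStub, 'registration email');
nowStub.restore();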

How to modularize tasks for composability

In this modularization series, we focused on one perspective. There is no restriction on turning the tables and seeing things from the opposite vantage point. We can take agenda as an injectable object. The classic approach is the one used when injecting (or mounting) app instances into a set of reusable routes (RESTful APIs).

How to expose task scheduling via a RESTful API

One of the reasons to opt for agenda for background task processing is its ability to persist jobs in a database, and resume pending jobs even after a database server shutdown, crash, or data migration from one instance to the next.

This makes it easy to integrate job processing into regular RESTful APIs. We have to remember that background tasks are mainly designed to run like cronjobs. A sketch of such an integration follows.
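A sketch of such an endpoint, assuming the worker module from the previous section (the route path and payload shape are illustrative):

//routes/tasks.js -- exposing task scheduling over HTTP
var router = require('express').Router();
var agenda = require('../worker');

router.post('/tasks', function(req, res, next){
    //e.g. {"when": "in 24 hours", "name": "registration email", "data": {"userId": "abc123"}}
    agenda.schedule(req.body.when, req.body.name, req.body.data);
    return res.status(201).json({scheduled: req.body.name});
});

module.exports = router;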

Alternatives to agenda scheduling model

In this article we approached job scheduling from a library perspective, with agenda. agenda is certainly only one of multiple solutions in the wild, cronjobs being another.

Another viable alternative is tapping into system-based solutions such as monit, systemd timers on Linux, or launchd on macOS.

There is a discussion on how to use nodejs to execute monit tasks in this blog and monit service poll time.

Modularization of Scheduled Tasks

Modularization of scheduled tasks requires two essential steps, as for any other module. The first step is to make sure the job definition and job trigger (invocation) are exportable, the same way independent functions are. The second step is to provide access to them via an index.

The next two steps help to achieve these two objectives. Before we dive in, it is worth clarifying a couple of points.

  • Tasks can be scheduled from dedicated libraries, cronjobs, or software such as monit.
  • There are a lot of libraries to choose from, such as bull, bee-queue, or kue. agenda was chosen for clarification purposes.
  • Task invocation can be triggered from sockets, routes, or agenda handlers.
  • Examples of delayed tasks are sending an email at a given time, deleting inactive accounts, data backups, etc.

agenda uses mongodb to store job descriptions, which is a good choice in case the project under consideration already relies on mongodb for data persistence.

Conclusion

Modularization is key when crafting re-usable, composable software. Scheduled tasks are not an exception to this rule. Background job modularization brings elegance to the codebase, reduces copy/paste instances, and improves performance and testability.

In this article, we revisited how to make background jobs more testable by leveraging key modularization techniques. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #scheduled-jobs #nodejs

A server requires the use of network resources, some of which perform expensive read/write operations. Testing servers introduces side effects, some of which are expensive and may cause unintended consequences when not mocked during the testing phase. To limit the chances of breaking something, testing servers has to be done in isolation.

The question to ask at this stage is: how do we get there? This blog article will explore some of the ways to answer this question.

The motivation for modularization is to reduce the complexity associated with large-scale expressjs applications. In the nodejs server context, we will shift the focus to making sure most of the parts are accessible to tests in isolation.

In this article we will talk about:

  • How to modularize nodejs server for reusability.
  • How to modularize nodejs server for testability.
  • How to modularize nodejs server for composability.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

A nodejs application server comes in two flavors: using the native nodejs http library, or adopting a server provided via a framework, in our case expressjs.

Using the expressjs framework, classic server code looks like the following example:

var express = require('express'),
    app = express(),
    port = process.env.PORT || 3000;
/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!')
});

app.listen(port, function () {
  console.log('Example app listening on port 3000!')
});
//source: https://expressjs.com/en/starter/hello-world.html

Example: a minimal expressjs server

As requirements increase, this file becomes exponentially big. Most applications run on top of expressjs, a popular library in the nodejs world. To keep server.js small, regardless of requirements and dependent modules, moving most of the code into modules makes a difference.

var http = require('http'),
  hostname = 'localhost',
  port = process.env.PORT || 3000,
  server = http.createServer(function(req, res){
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Hello World\n');
  });

//Alternatively, with expressjs
var express = require('express'),
    app = express(),
    server = http.createServer(app);
require('./app/routes')(app);//mounts the application routes

server.listen(port, hostname, function (){
  console.log(['Server running at http://', hostname, ':', port].join(''));
});
//source: https://nodejs.org/api/synopsis.html#synopsis_example

Example: a native http server, and the same server wrapping an expressjs app

What can possibly go wrong?

When trying to figure out how to approach modularizing nodejs servers, the following points may be a challenge:

  • Understanding where to start, and where to stop with server modularization
  • Understanding key parts that need abstraction, or how/where to inject dependencies
  • Making servers testable

The following sections will explore more on making points stated above work.

How to modularize nodejs server for reusability

How do we apply the modularization technique in a server context? Or, how do we break a larger server file down into smaller, more granular alternatives?

Server reusability becomes an issue when it becomes clear that the server bootstrapping code either needs some refactoring or presents an opportunity to add extra test coverage.

In order to make the server available to a third-party sandboxed testing environment, the server has to be exportable first.

In order to be able to load and mock/stub certain areas of the server code, the server still has to be exportable.

Like any other modularization technique we used, two steps are going to be in play. Since our case involves multiple players, for instance expressjs, WebSocket, and whatnot, we have to look at the HTTP server as an equal of those other possible servers.

How to modularize nodejs server for testability

Simulations of start/stop while running tests are catalysts of this exercise.

Testability and composability are the other real drivers for getting the server to be modular. A modular server makes it easy to load the server into the testing sandbox as we load any other object, as well as to mock any dependency we deem unnecessary or that prevents us from getting the job done.

Simulation of start/stop while running tests is discussed in “How to correctly unit test an express server”: there is a better code structure organization that makes it easy to test, get coverage, etc. See also “Testing nodejs with mocha”.

The previous example shows how much simpler server initialization becomes, but that comes with an additional library to install. Modularization of the above two code segments makes it possible to test the server in isolation.

module.exports = server;

Example: Modularization ~ this line makes the server available in our tests ~ source
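With the server exported, a test can load it like any other module. A minimal sketch using mocha and supertest (both assumed to be installed) could look like this:

//test/server.test.js -- the exported server loaded into a test sandbox
var request = require('supertest');
var server = require('../server');

describe('GET /', function(){
    after(function(){
        if(server.listening) server.close();//stop listening once the suite is done
    });

    it('responds to the home route', function(done){
        request(server).get('/').expect(200, done);
    });
});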

How to modularize nodejs server for composability

The challenge is to expose the HTTP server in a way that redis/WebSocket or agenda can re-use the same server: making the server injectable.

The composability of the server is rather counter-intuitive. In most cases, the server will be injected into other components, for those components to mount additional server capabilities. The code sample proves this point by making the HTTP server available to a WebSocket component, so that the WebSocket server can be aware of, and mounted/attached to, the same instance of the HTTP server.

var http = require('http'), 
    app = require('express')(),
    server = http.createServer(app),
    sio = require("socket.io")(server);

//... more wiring: routes, middleware, etc.

module.exports = server;

Conclusion

Modularization is key in making a nodejs server elegant; it serves as a baseline for performance improvements and improved testability. In this article, we revisited how to achieve nodejs server modularity, with stress on testability and code reusability. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #nodejs #expressjs

We assume most of the system components to be accessible for testability. However, that is challenging when routes are a little bit complex. To reduce the complexity that comes with working on large-scale expressjs routes, we will apply a technique known as manifest routes to make route declarations change-proof, making them more stable as the rest of the application evolves.

In this article we will talk about:

  • The need to have manifest routes technique
  • How to apply the manifest routes as a modularization technique

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

var express = require('express');
var app = express();
var port = process.env.PORT || 3000;
var User = require('./models/user');//mongoose model, assumed to exist

app.get('/', function(req, res, next) {  
  res.render('index', { title: 'Express' });
});

/** code that initializes everything, then comes this route */
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

app.listen(port, function () {
  console.log('Example app listening on port 3000!')
});

What can possibly go wrong?

When trying to figure out how to approach modularization of expressjs routes with a manifest route pattern, the following points may be a challenge:

  • Where to start with modularization without breaking the rest of the application
  • How to introduce a layered architecture without incurring an additional test burden, while making it easier to isolate tests

The following sections will explore more on making points stated above work.

The need to have manifest routes technique

There is a subtle nuance that is missing when following traditional approaches to modularization.

When adding an index file as a part of the modularization process, exporting the content of directories (and sub-directories, for that matter) does not, by itself, result in routes that can be plugged into existing expressjs applications.

The remedy is to create, isolate, export, and manifest them to the outer world.

How to apply manifest routes to handlers for reusability

The handlers are a beast in their own way.

A collection of related route handlers can be used as a baseline to create a controller layer. The modularization of this newly created/revealed layer can be achieved in two steps, as was the case for the other use cases. The first step consists of naming, ejecting, and exporting single functions as modules. The second step consists of adding an index to every directory and exporting the content of the directory.

Manifest routes

In essence, requiring a top-level directory will look for index.js at the top of that directory and make all the route content accessible to the caller.

var routes = require('./routes'); 

Example: /routes has index.js at top level directory ~ source

A typical default entry point of the application:

var express = require('express');  
var router = express.Router();

router.get('/', function(req, res, next) {  
  return res.render('index', { title: 'Express' });
});
module.exports = router;  

Example: default /index entry point

Anatomy of a route handler

module.exports = function (req, res) {  };

Example: routes/users/get-user|new-user|delete-user.js

“The most elegant configuration that I've found is to turn the larger routes with lots of sub-routes into a directory instead of a single route file” – Chev source

When the individual routes/users sub-directories are put together, the resulting index would look like the following code sample:

var router = require('express').Router();  
router.get('/get/:id', require('./get-user.js'));  
router.post('/new', require('./new-user.js'));  
router.post('/delete/:id', require('./delete-user.js'));  
module.exports = router;    

Example: routes/users/index.js

An update when routes/users/favorites/ adds more sub-directories:

router.use('/favorites', require('./favorites')); 
//... other sub-routes
module.exports = router;

Example: routes/users/index.js ~ after adding a new favorites requirement

We can go the extra mile and group route handlers into controllers. Using a router with a controller's route handlers would look like the following example:

var router = require('express').Router();
var catalogues = require('./controllers/catalogues');

router.route('/catalogues')
  .get(catalogues.getItem)
  .post(catalogues.createItem);
module.exports = router;

Conclusion

Modularization makes expressjs routes reusable, composable, and stable as the rest of the system evolves. It brings elegance to route composition, improves testability, and reduces instances of redundancy.

In this article, we revisited a technique that improves the elegance, testability, and re-usability of expressjs routes, known under the manifest routes moniker. We also re-stated that the manifest routes technique is an extra step beyond modularizing expressjs routes. There are additional complementary materials in the “Testing nodejs applications” book.


tags: #snippets #modularization #manifest-routes #nodejs #expressjs

Modularization of redis for testability

To take advantage of multicore systems, nodejs — being a single-threaded JavaScript runtime — spins up multiple processes to guarantee parallel processing capabilities. That works well until inter-process communication becomes an issue.

That is where key-stores such as redis come into the picture, to solve the inter-process communication problem while enhancing real-time experience.

This article showcases how to leverage modular design to provide testable and scalable code.

In this article we will talk about:

  • How to modularize redis clients for reusability
  • How to modularize redis clients for testability
  • How to modularize redis clients for composability
  • The need to have a redis powered pub/sub
  • Techniques to modularize redis powered pub/sub
  • The need for loose coupling between WebSocket and the redis pub/sub system
  • How to modularize WebSocket redis communications
  • How to modularize redis configuration

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Show me the code

Introducing extra components makes it hard to test a system in isolation. This example highlights some of the moving parts we will be discussing in this article:

//creating the Server -- alternative #1
var express = require('express');
var app = express();
var server = require('http').Server(app);

//creating the Server -- alternative #2
var express = require('express'),
    app = express(),
    server = require('http').createServer(app);

//Initialization of WebSocket Server + Redis Pub/Sub
var wss = require("socket.io")(server),
    redis = require('redis'),
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost),
    sub = redis.createClient(rport, rhost);

//HTTP session middleware thing
function middleware(req, res, next){
    //... session handling goes here
    next();
}

//exchanging session values
wss.use(function(socket, next){
    middleware(socket.request, socket.request.res, next);
});

//express uses the same middleware for session management
app.use(middleware);

//somewhere
wss.sockets.on("connection", function(socket) {

    //socket.request.session
    //Now it's available from Socket.IO sockets too! Win!
    socket.on('message', (event) => {
        var payload = JSON.parse(event.payload || event),
            user = socket.handshake.user || false;

        //except when coming from pub
        pub.publish(payload.conversation, JSON.stringify(payload));
    });

    //redis listener
    sub.on('message', function(channel, event) {
        var payload = JSON.parse(event.payload || event),
            user = socket.handshake.user || false;
        wss.
            sockets.
            in(payload.conversation).
            emit('message', payload);
    });
});

Example: an HTTP server, socket.io, and redis pub/sub wired together

What can possibly go wrong?

  • Having redis.createClient() everywhere makes it hard to mock
  • The creation/deletion of redis instances (pub/sub) is out of control

One way is to create one instance (preferably while loading the top-level module) and inject that instance into dependent modules (see “Managing modularity and redis connections in nodejs”). The other way: the node module loader caches loaded modules, which provides a singleton by default.

The need to have a redis powered pub/sub

JavaScript, and nodejs in particular, is a single-threaded language — but has other ways to provide parallel computing.

It is possible to spin up any number of processes depending on application needs. Process-to-process communication then becomes an issue: when one process mutates the state of a shared object, for instance, any other process on the same server has to be informed about the update.

Out of the box, that is not feasible. The pub/sub mechanism that redis brings to the table makes it possible to solve problems like this one.

How to modularize redis clients for testability

pub/sub implementations make the code intimidating, especially when the time comes to test.

We assume that the existing code has little to no tests and, most importantly, is not modularized. Or that it is well tested and well modularized, but the addition of real-time handling creates a need to leverage pub/sub to provide a near real-time experience.

The first and easy thing to do in such a scenario is to break code blocks into smaller chunks that we can test in isolation.

  • In essence, the pub and sub are both redis clients that have to be created independently, so that they run in two separate contexts and processes. We may be tempted to use pub and sub as the same client; that would be detrimental and create race conditions from the get-go.
  • Delegating pub/sub creation to a utility function makes it possible to mock the clients.
  • The utility function should accept an injected redis. It is possible to go the extra mile and delegate redis instance initialization to its own factory. That way, it becomes even easier to mock the redis instance itself.

Past these steps, other refactoring techniques can take over.

// hard to mock when located in [root]/index.js
var redis = require('redis'),
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost),
    sub = redis.createClient(rport, rhost);

// Easy to mock with the introduction of a createClient factory
// in /lib/util/redis.js|redis-helper.js
module.exports = function(redis){
    var host = process.env.REDIS_HOST;
    var port = process.env.REDIS_PORT;
    return redis.createClient(port, host);
};

How to modularize redis clients for reusability

The example provided in this article scratches the surface on what can be achieved when integrating redis into a project.

What would be the chain of events if, for some reason, the redis server goes down? Would that affect the overall health and usability of the whole application?

If the answer is yes, or not sure, that gives a pretty good indication of the need to isolate the usage of redis and to make sure its modularity is sound and failure-proof.

Modularization of redis can be seen from two angles: publishing a set of events to the shared store, and subscribing to the shared store for updates on events of interest.

By making the redis integration modular, we also have to make sure that redis server downtime/failure does not translate into a cascading effect that may bring the application down.

//in app|server|index.js
var client = require("redis").createClient();
var app = require("./lib")(client); //<- injection

//injecting redis into a route
var createClient = require('./lib/util/redis');
module.exports = function(redis){
  return function(req, res, next){
    var redisClient = createClient(redis); // the injected redis is used to create the client
    return res.status(200).json({message: 'About Issues'});
  };
};

//usage ~ the injected redis comes from the composition root shown above
var getMessage = require('./')(redis);
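
To keep a redis outage from cascading, a hedged sketch below attaches an error handler and a retry strategy to the client; retry_strategy is an option supported by the node redis v3 client, and the timings chosen here are assumptions, not prescriptions:

//in app|server|index.js ~ guard against redis downtime
var client = require("redis").createClient({
  retry_strategy: function (options) {
    // give up after roughly one minute and keep the app alive in a degraded mode
    if (options.total_retry_time > 60 * 1000) return undefined;
    return Math.min(options.attempt * 100, 3000); // back off, up to 3 seconds between attempts
  }
});
// an unhandled 'error' event would crash the process; logging keeps the app running
client.on("error", function (err) {
  console.error("redis unavailable:", err.message);
});
var app = require("./lib")(client); //<- injection, same as above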

How to modularize redis clients for composability

In the previous two sections, we have seen how pub/sub, powered by a redis server, brings a near real-time experience to the program.

The problem we faced in both sections is that redis is tightly coupled to all modules, even those that do not need to use it.

Composability becomes an issue when we need to avoid having a single point of failure in the program, as well as to provide test coverage deep enough to catch common failure cases.

// in /lib/util/redis
const redis = require('redis');
// when options carry an injected (mock) redis, use it; otherwise fall back to the real library
module.exports = function(options){
  return options && options.redis ? options.redis : redis;
};

The above small factory may look a little weird, but it makes it possible to offload initialization to the calling code, and it makes the redis instance possible to mock when testing.
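
A possible usage of that factory, assuming the version above that returns an injected library when one is provided:

// production code ~ falls back to the real redis library
var redisLib = require('./lib/util/redis')();
var client = redisLib.createClient();

// test code ~ inject a hand-rolled stub, no redis server involved
var fakeRedis = {
  createClient: function(){
    return { publish: function(){}, subscribe: function(){}, on: function(){} };
  }
};
var mockedLib = require('./lib/util/redis')({ redis: fakeRedis });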

Techniques to modularize redis powered pub/sub

The need to modularize the pub/sub code has been discussed in previous segments.

The issue we still have at this point is at the pub/sub handler level. As we may have noticed already, testing pub/sub handlers is challenging, especially without an up and running redis instance.

Modularizing these two kinds of handlers provides an opportunity to test pub/sub handlers in isolation. It also makes it possible to share the handlers with other systems that may need exactly the same kind of behavior.
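
One hedged way to get there is to export the handlers on their own and keep the wiring thin; the lib/handlers/message.js and lib/subscriber.js paths are assumptions used for illustration:

// lib/handlers/message.js ~ pure business logic, testable without a redis instance
module.exports.onMessage = function (channel, message) {
  var payload = JSON.parse(message);
  // ...apply business logic to the payload...
  return payload;
};

// lib/subscriber.js ~ thin wiring; both the sub client and the handlers are injected
module.exports = function (sub, handlers) {
  sub.subscribe('updates');
  sub.on('message', handlers.onMessage);
};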

The need to loosen coupling between WebSocket and the redis pub/sub system

One example of decoupling pub/sub from redis and making its handlers reusable can be seen when the WebSocket server has to leverage socket server events.

For example, on a new message read on the socket, the socket server should notify other processes that there is in fact a new message on the socket.

The pub is the right place to post this kind of notification. When a new message is posted in the store, the WebSocket server may need to respond to a particular user, and so forth.
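
As a hedged sketch, assuming a socket.io-style server named io and the pub client created in earlier sections:

// on a new message read on the socket, notify other processes through the pub client
io.on('connection', function (socket) {
  socket.on('message', function (payload) {
    pub.publish('messages', JSON.stringify({ from: socket.id, payload: payload }));
  });
});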

How to modularize WebSocket redis communications

There is a use case where the same message can be ping-ponged between pub and sub indefinitely.

To make sure such a thing doesn't happen, a communication protocol should be established. For example, when a message is published to the store by a WebSocket server and the message is destined for all participating processes, a corresponding listener should read it from the store and forward it to all participating sockets. That way, a socket that receives a message simply publishes it but does not answer the sender right away.

Subscribed sockets can then read from the store and forward the message to the right receiver.
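
Continuing the sketch above, the subscribing side only forwards; it never re-publishes what it reads, which is what breaks the ping-pong loop:

// every participating process runs this subscriber
sub.subscribe('messages');
sub.on('message', function (channel, raw) {
  var message = JSON.parse(raw);
  // forward to connected sockets; forwarding never triggers another publish
  io.emit('message', message.payload);
});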

There is an entire blog post dedicated to modularizing nodejs WebSockets here.

How to modularize redis configuration

The need to configure a server arises not only for the redis server but also for any other server or service.

In this particular instance, we will see how to extract redis configuration into an independent module that can then be used alongside the rest of the configurations.

//from the example above
const redis = require("redis");
const port = process.env.REDIS_PORT || "6379";
const host = process.env.REDIS_HOST || "127.0.0.1";
module.exports = redis.createClient(port, host);

//abstracting configurations in lib/configs
module.exports = Object.freeze({
  redis: {
    port: process.env.REDIS_PORT || "6379",
    host: process.env.REDIS_HOST || "127.0.0.1"
  }
});

//using the abstracted configurations
const redis = require("redis");
const configs = require('./lib/configs');
module.exports = redis.createClient(
  configs.redis.port,
  configs.redis.host
);

This strategy to rethink application structure can be found here.

Conclusion

Modularization is a key strategy in crafting reusable, composable software. Modularization not only brings elegance but also keeps copy/paste detectors happy, while improving both performance and testability.

In this article, we revisited how to aggregate redis pub/sub code into composable and testable modules. The need to group related tasks into modules involves the ability to add pub/sub support on demand and to use various solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.

References + Reading List

tags: #snippets #redis #nodejs #modularization

Systems monitoring is critical to systems deployed at scale. In addition to traditional monitoring services native to the nodejs ecosystem, this article explores how to monitor nodejs applications using third-party systems in a way that covers the entire stack and provides an overall state in one bird's-eye view.

In this article we will talk about:

  • Data collection tools
  • Data visualization tools
  • Self-healing nodejs systems
  • Popular monitoring stacks

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Monitoring

Monitoring, custom alerts, and notifications systems

Monitoring overall system health makes it possible to take immediate action when something unexpected happens. Key metrics to look at are CPU usage, memory availability, disk capacity, overall health, and software errors.

Monitoring systems make it easy to detect, identify, and eventually repair or recover from a failure in a reasonable time. When monitoring production applications, the aim is to respond quickly to incidents. Sometimes, incident resolution can also be automated: a notification system that triggers some sort of script execution to remediate known issues. This sort of system is also called a self-healing system.
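
As an illustration only, a minimal self-healing watchdog could look like the sketch below; the /health endpoint, the port, and the restart command are assumptions, not prescriptions:

// watchdog.js ~ poll a health endpoint and run a remediation script on repeated failures
var http = require('http');
var exec = require('child_process').exec;

var failures = 0;
setInterval(function () {
  http.get('http://localhost:3000/health', function (res) {
    failures = res.statusCode === 200 ? 0 : failures + 1;
    if (failures >= 3) {
      exec('systemctl restart my-node-app'); // remediation for a known issue
      failures = 0;
    }
  }).on('error', function () { failures += 1; });
}, 10 * 1000);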

Monitoring goes hand-in-hand with notification ~ alerting the right systems and people either about what is about to happen (early or predictive detection), or about what just happened (near real-time detection) ~ so that remediation action can be taken. We talk about self-healing (or resilient) systems when the system under stress applies remediation on its own, automatically and without direct human intervention.

Complex monitoring systems are available for free and for a fee, open source as well as closed source. The following examples provide a few to look into.

It is a good idea to use a monitoring tool hosted outside the application. This strategy bails the team out when downtime originates from an entire data center or the same rack of servers. However, monitoring tools deployed on the same server have the advantage of better taking the pulse of the environment on which the application is deployed. A winning strategy is to deploy both solutions, so that notifications can go out even when an entire data center experiences downtime.

Conclusion

In this article, we revisited how to achieve a bird's-eye view of full-stack nodejs application monitoring using third-party systems. We highlighted how logging and monitoring complement each other. There are additional complementary materials in the “Testing nodejs applications” book.

References

#monitoring #nodejs #data-collection #visualization #data-viz

Access to servers via cloud infrastructure has raised the bar of what can be achieved by leveraging third-party computing power. One area, amongst multiple others, is the possibility to centralize code repositories and development environments in the cloud.

This blog post is a collection of resources until additional content lands in it.

In this article we will talk about:

  • Leveraging third-party services for front end development
  • Leveraging cloud-native IDE for backend development
  • Deep integration of github with cloud-native IDEs
  • The code to move the development to the cloud
  • Services available for cloud development
  • Remote debugging using tunneling

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tune up their working environment. You can use this link to buy the book.

Cloud IDE

Unlike the front-end dev environment, a backend cloud IDE is a little tricky. Requirements of backend code are a little different from the front end and sometimes involve a lot of moving parts. Things such as databases, authentication, payment processing systems, etc. require special attention when developing code.

The following are some serious contenders to look into when wanting to move a part of backend code completely to the cloud environment.

Front End

Cloud IDEs, especially on the front-end side, are getting a little more serious. Not only do they remove the hassle of environment setup, they also make it possible to demo end results in real time. There is an increased capability to ship code as early as possible. There is a myriad of those, but these two stand out.

Databases

Tunneling for remote debugging

It is quite a challenge to debug certain things; WebHooks from a live server are one of them. The following tool makes it possible to test those. It would be even easier if the development environment were entirely cloud-powered.

Miscellaneous

Conclusion

In this article, we reviewed possibilities to move development to the cloud, the costs associated with the move, and cases when that move makes sense. There are no additional complementary materials in the “Testing nodejs applications” book, but this can be a good start to centralizing testing efforts.

References

#cloud #nodejs #github #cloud-idea