Simple Engineering

The idea to write this post stems from reading not-so-clear user stories from various sources and projects. Instead of prescribing what is right or wrong, we will take a different turn and focus on asking questions about the nature of user stories.

If you don't already know, this article complements “How to write Test Cases developers will love”.

By the end of the read, you will have ideas on how to make key improvements to existing user stories, and how to make them readable and easy to digest for the developers who will be reading and implementing them.

In this article we will talk about:

  • Choosing the right user story template based on “As —, I want to —, so that —”
  • Choosing the right user story template based on Given/When/Then
  • Choosing between a user story and a job story
  • The only template a job story will need: “When —, I want to —, so I can —”
  • Choosing acceptance criteria that developers and stakeholders can verify

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

What to expect from a user story

When writing a user story, we will have to answer the following questions thoroughly, yet in simple terms. The final product will be a mold our user stories have to fit in, or a template our user stories have to follow.

  • Why would developers hate reading our User Stories (hint: they don't, they hate the way user stories are written)
  • What should go into a User Story
  • What should not go into a User Story
  • Why does good messaging matter in our particular case, and what is the definition of good messaging in our context?
  • How can we leverage acceptance criteria as a vector to reduce the bug count
  • What format are User Stories going to adopt
  • Or, among known User Story formats (JTBD, etc.), which format makes sense for our use case
  • Why it is a good idea to give developers a say when crafting a User Story
  • Why it may be a bad idea to give developers a say when crafting or validating User Stories
  • How do we measure the performance of our new User Story template?

There is always a starting point. Instead of starting from nothing to answer these questions, it makes more sense to craft a typical, real user story, then test and answer the questions against a tangible example.

All criticism, or answers, should be targeted at making the user story at hand a little better.
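
To give the criticism a concrete target, here is a minimal, entirely hypothetical user story following the templates above:

Title: Password reset

As a registered user,
I want to reset my password via an emailed link,
so that I can regain access to my account.

Acceptance criteria:
  Given a registered email address,
  When I request a password reset,
  Then a single-use reset link is emailed to me within 5 minutes.

Example: a hypothetical user story, with acceptance criteria a developer can reuse as test cases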

Conclusion

In this article, we revisited how to write user stories that convey a clear message on what has to be done, in a way that developers may re-use the acceptance criteria as their test cases. As always, there are additional complementary materials in the “Testing nodejs applications” book that may be of interest to you.

References

#user-stories #bugs #bug-report #QA

Scheduled tasks are hard to debug. Owing to their asynchronous nature, bugs in scheduled tasks strike later; anything that can help prevent that behavior and curb failures ahead of time is always good to have.

Unit testing is one of the effective tools to challenge this behavior. The question we answer here is: How do we test scheduled tasks in isolation? This article introduces some techniques to do that. Using modularization techniques on scheduled background tasks, we will shift focus to making chunks of code accessible to testing tools.

In this article we will talk about:

  • How to define a job(task)
  • How to trigger a job(task)
  • How to modularize tasks for testability
  • How to modularize tasks for reusability
  • How to modularize tasks for composability
  • How to expose task scheduling via a RESTful API
  • Alternatives to the agenda scheduling model

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

The following example shows how a job trigger can be used inside an expressjs route:


//jobs/email.js
var email = require('some-lib-to-send-emails');
var User = require('./models/user.js');

module.exports = function(agenda) {
  agenda.define('registration email', function(job, done) {
    User.findById(job.attrs.data.userId, function(err, user) {
      if (err) return done(err);
      var message = ['Thanks for registering ', user.name, ', more content'].join('');
      return email(user.email, message, done);
    });
  });
  agenda.define('reset password', function(job, done) {/* ... more code */});
  // More email related jobs
};

//route.js
//lib/controllers/user-controller.js
var express = require('express'),
    app = express(),
    User = require('../models/user-model'),
    agenda = require('../worker.js');

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if (err) return next(err);
    //@todo - Schedule an email to be sent before expiration time
    //@todo - Schedule an email to be sent 24 hours later
    agenda.now('registration email', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

Example: defining email jobs, and triggering one from an expressjs route

What can possibly go wrong?

When trying to figure out how to approach modularization of nodejs background jobs, the following points may be quite a challenge on their own:

  • abstracting, and/or injecting, the background job library into an existing application
  • abstracting, or scheduling, jobs outside the application

The following sections explore how to make the points stated above work.

How to define a job

The agenda library comes with an expressive API. The interface provides two sets of utilities, one of which is .define(), which does the task definition chore. The following example illustrates the idea.

agenda.define('registration email', function(job, done) {
  /* ... task implementation ... */
});

How to trigger a job

As stated earlier, the agenda library comes with an interface to trigger a job or schedule an already defined job. The following example illustrates this idea.

agenda.now('registration email', {userId: userId});
agenda.every('3 minutes', 'delete old users');
agenda.every('1 hour', 'print analytics report');

How to modularize tasks for reusability

There is a striking similarity between event handling and task definition.

That similarity raises a whole new set of challenges, one of which turns out to be a tight coupling between task definition and the library that is expected to execute those jobs.

The refactoring technique we have been using all along is handy in the current context as well. We have to eject the job definition from agenda library constructs. The next step in the refactoring iteration is to inject the agenda object as a dependency, wherever it is needed.

The modularization cannot end at this point; we also need to export individual jobs (task handlers) and expose those exported modules via an index file.
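
A sketch of that refactoring, with assumed file names, could look like the following:

//jobs/registration-email.js - the handler, ejected from agenda constructs
var email = require('some-lib-to-send-emails');
var User = require('../models/user.js');

module.exports = function registrationEmail(job, done) {
  User.findById(job.attrs.data.userId, function(err, user) {
    if (err) return done(err);
    return email(user.email, ['Thanks for registering ', user.name].join(''), done);
  });
};

//jobs/index.js - wires handlers up, with agenda injected as a dependency
module.exports = function(agenda) {
  agenda.define('registration email', require('./registration-email.js'));
  agenda.define('reset password', require('./reset-password.js'));
};

Example: ejecting a job handler, and injecting agenda via jobs/index.js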

How to modularize tasks for testability

The challenges of mocking any object apply to the agenda instance as well.

The implementation of jobs (or task handlers) will be lost as soon as a stub/fake is provided. The argument that stubs play well is valid, as long as independent jobs (task handlers) are tested in isolation.

To avoid the need to mock the agenda object in multiple places, loading agenda from a dedicated module provides quite a good solution to this issue.
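
A test sketch under those assumptions, using mocha with sinon and proxyquire to swap dependencies for fakes (layout and names assumed):

//test/jobs/registration-email.spec.js
var sinon = require('sinon');
var proxyquire = require('proxyquire');

describe('registration email job', function() {
  it('loads the user attached to the job, then emails', function(done) {
    var emailStub = sinon.stub().yields(null);
    var User = { findById: sinon.stub().yields(null, { name: 'Jane', email: 'jane@example.com' }) };
    //swap real dependencies for fakes, so the handler runs in isolation
    var registrationEmail = proxyquire('../../jobs/registration-email.js', {
      'some-lib-to-send-emails': emailStub,
      '../models/user.js': User
    });
    //a fake agenda job, carrying only the metadata the handler reads
    registrationEmail({ attrs: { data: { userId: '42' } } }, function(err) {
      sinon.assert.calledWith(User.findById, '42');
      sinon.assert.calledWith(emailStub, 'jane@example.com');
      done(err);
    });
  });
});

Example: testing a job handler in isolation, without a running agenda or database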

How to modularize tasks for composability

In this modularization series, we focused on one perspective. Nothing prevents us from turning the tables and seeing things from the opposite vantage point: we can treat agenda as an injectable object. The classic approach is the one used when injecting (or mounting) app instances into a set of reusable routes (RESTful APIs).

How to expose task scheduling via a RESTful API

One of the reasons to opt for agenda for background task processing is its ability to persist jobs in a database, and to resume pending jobs even after a server shutdown, a crash, or a data migration from one instance to the next.

This makes it easy to integrate job processing into regular RESTful APIs. We have to remember that background tasks are mainly designed to run like cronjobs.
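
Revisiting the registration route from the first example, the @todo items can be filled in with agenda's scheduling primitives; the 'registration reminder' job name is an assumption, to be defined in jobs/ like the others:

app.post('/users', function(req, res, next) {
  var user = new User(req.body);
  user.save(function(err) {
    if (err) return next(err);
    //runs right away, persisted in mongodb by agenda
    agenda.now('registration email', { userId: user.primary() });
    //a delayed job filling in the earlier @todo
    agenda.schedule('in 24 hours', 'registration reminder', { userId: user.primary() });
    return res.status(201).json(user);
  });
});

Example: exposing immediate and delayed task scheduling via a RESTful route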

Alternatives to the agenda scheduling model

In this article, we approached job scheduling from a library perspective, using agenda. agenda is certainly just one of multiple solutions in the wild; cronjobs are another.

Another viable alternative is tapping into system-based solutions, such as monit or systemd timers (systemctl) on Linux, and launchd on macOS.

There is a discussion elsewhere on this blog on how to use nodejs to execute monit tasks, and on the monit service poll time.

Modularization of Scheduled Tasks

Modularization of scheduled tasks requires two essential steps, as for any other module. The first step is to make sure the job definition and job trigger (invocation) are exportable, the same way independent functions are. The second step is to provide access to them via an index.

These two steps help to achieve the objectives above. Before we dive in, it is worth clarifying a couple of points.

  • Tasks can be scheduled from dedicated libraries, cronjobs, and software such as monit.
  • There are a lot of libraries to choose from, such as bull, bee-queue, or kue; agenda is chosen for illustration purposes.
  • Task invocation can be triggered from the socket, routes, and agenda handlers
  • Examples of delayed tasks are sending an email at a given time, deleting inactive accounts, data backups, etc.

agenda uses mongodb to store job descriptions, a good choice in case the project under consideration already relies on mongodb for data persistence.
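
An example project structure, assembled from the files used in this article (cleanup.js is illustrative), could look like this:

.
├── jobs/
│   ├── index.js              // manifests every job to the outer world
│   ├── registration-email.js
│   └── cleanup.js
├── lib/
│   └── controllers/
│       └── user-controller.js
├── models/
│   └── user.js
└── worker.js                 // creates and configures the agenda instance

Example: project structure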

Conclusion

Modularization is key when crafting re-usable composable software. Scheduled tasks are not an exception to this rule. Background jobs modularization brings elegance to the codebase, reduces copy/paste instances, improves performance and testability.

In this article, we revisited how to make background jobs more testable by leveraging key modularization techniques. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #modularization #scheduled-jobs #nodejs

A server requires the use of network resources, some of which perform expensive read/write operations. Testing servers introduces side effects, some of them expensive, and may cause unintended consequences when those resources are not mocked in the testing phase. To limit the chances of breaking something, testing servers has to be done in isolation.

The question to ask at this stage is: How do we get there? This blog article will explore some of the ways to answer that question.

The motivation for modularization is to reduce the complexity associated with large-scale expressjs applications. In the nodejs server context, we will shift focus to making sure most of the parts are accessible to tests in isolation.

In this article we will talk about:

  • How to modularize nodejs server for reusability.
  • How to modularize nodejs server for testability.
  • How to modularize nodejs server for composability.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

A nodejs application server comes in two flavors: using the native nodejs library, or adopting a server provided by a framework, in our case expressjs.

Using the expressjs framework, classic server code looks like the following example:

var express = require('express'),
    app = express(),
    port = process.env.PORT || 3000;
/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!');
});

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!');
});
//source: https://expressjs.com/en/starter/hello-world.html

Example: a classic expressjs hello world server

As requirements increase, this file grows quickly. Most applications run on top of expressjs, a popular library in the nodejs world. To keep server.js small, regardless of requirements and dependent modules, moving most of the code into modules makes a difference.

var http = require('http'),
    hostname = 'localhost',
    port = process.env.PORT || 3000,
    server = http.createServer(function(req, res){
      res.statusCode = 200;
      res.setHeader('Content-Type', 'text/plain');
      res.end('Hello World\n');
    });

//Alternatively
var express = require('express'),
    app = express(),
    server = http.createServer(app);
//routes are mounted as a module, not declared inline
require('./app/routes')(app);

server.listen(port, hostname, function (){
  console.log(['Server running at http://', hostname, ':', port].join(''));
});
//source: https://nodejs.org/api/synopsis.html#synopsis_example

Example: a native http server, and an expressjs alternative

What can possibly go wrong?

When trying to figure out how to approach modularizing nodejs servers, the following points may be a challenge:

  • Understanding where to start, and where to stop with server modularization
  • Understanding key parts that need abstraction, or how/where to inject dependencies
  • Making servers testable

The following sections explore how to make the points stated above work.

How to modularize nodejs server for reusability

How do we apply the modularization technique in a server context? Or, how do we break down a larger server file into smaller, more granular alternatives?

Server reusability becomes an issue when it becomes clear that the server bootstrapping code either needs some refactoring, or presents an opportunity to add extra test coverage.

In order to make the server available to the third-party sandboxed testing environment, the server has to be exportable first.

Likewise, in order to load and mock/stub certain areas of the server code, the server has to be exportable.

Like any other modularization exercise we went through, two steps come into play. Since our case involves multiple players, for instance expressjs, WebSocket and whatnot, we have to look at the HTTP server as an equal of those other possible servers.
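
A sketch of that refactoring: the listen call moves out of the module, so tests and other callers control start/stop (file names assumed):

//server.js - the server becomes a module; listening is deferred to the caller
var express = require('express'),
    app = express(),
    server = require('http').createServer(app);

require('./app/routes')(app);

//no listen() here: bin/www or the test sandbox decides when to start
module.exports = server;

//bin/www - the only place that actually binds a port
var server = require('../server');
server.listen(process.env.PORT || 3000);

Example: an exportable server, with port binding pushed to the edge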

How to modularize nodejs server for testability

Simulating start/stop while running tests is the catalyst of this exercise.

Testability and composability are other real drivers for making the server modular. A modular server is easy to load into the testing sandbox, the way we load any other object, and easy to strip of any dependency we deem unnecessary or that prevents us from getting the job done.

Related reading: “Simulation of start/stop while running tests”, “How to correctly unit test an express server” (a better code structure organization makes it easy to test, get coverage, etc.), and “Testing nodejs with mocha”.

The previous example shows how much simpler server initialization becomes, though that comes with an additional library to install. Modularization of the above two code segments makes it possible to test the server in isolation.

module.exports = server;

Example: Modularization – this line makes server available in our tests ~ source
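
With the server exported, a mocha test can start and stop it inside the testing sandbox. A sketch, assuming the exportable server above and an ephemeral port:

//test/server.spec.js
var http = require('http');
var server = require('../server');

describe('server', function() {
  before(function(done) { server.listen(0, done); }); //0 lets the OS pick a free port
  after(function(done) { server.close(done); });

  it('responds on /', function(done) {
    var port = server.address().port;
    http.get('http://127.0.0.1:' + port + '/', function(res) {
      if (res.statusCode !== 200) return done(new Error('expected a 200, got ' + res.statusCode));
      return done();
    });
  });
});

Example: starting and stopping the exported server from a test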

How to modularize nodejs server for composability

The challenge is to expose the HTTP server in a way that lets redis/websocket or agenda re-use the same server; in other words, making the server injectable.

The composability of the server is rather counter-intuitive. In most cases, the server will be injected into other components, for those components to mount additional server capabilities. The code sample proves this point by making the HTTP server available to a WebSocket component so that the WebSocket can be aware and mounted/attached to the same instance of the HTTP server.

var http = require('http'), 
    app = require('express')(),
    server = http.createServer(app),
    sio = require("socket.io")(server);

/* ... more wiring: routes, middleware ... */

module.exports = server;

Conclusion

Modularization is key in making a nodejs server elegant; it serves as a baseline for performance improvements and improved testability. In this article, we revisited how to achieve nodejs server modularity, with stress on testability and code reusability. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #modularization #nodejs #expressjs

We assume most system components are accessible for testing. However, that is challenging when routes are a little complex. To reduce the complexity that comes with working on large-scale expressjs routes, we will apply a technique known as manifest routes to make route declarations change-proof, keeping them stable as the rest of the application evolves.

In this article we will talk about:

  • The need for the manifest routes technique
  • How to apply the manifest routes as a modularization technique

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

var express = require('express');
var app = express();
//assumed: a mongoose-style user model
var User = require('./models/user');
var port = process.env.PORT || 3000;

app.get('/', function(req, res, next) {
  res.render('index', { title: 'Express' });
});

/** code that initializes everything, then comes this route */
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

app.listen(port, function () {
  console.log('Example app listening on port ' + port + '!');
});

What can possibly go wrong?

When trying to figure out how to approach modularization of expressjs routes with a manifest route pattern, the following points may be a challenge:

  • Where to start with modularization without breaking the rest of the application
  • How to introduce the layered architecture, without incurring additional test burden, but making it easier to isolate tests

The following sections explore how to make the points stated above work.

The need for the manifest routes technique

There is a subtle nuance that gets missed when following traditional approaches to modularization.

When adding an index file as a part of the modularization process, exporting the content of directories (and sub-directories, for that matter) does not, by itself, result in routes that can be plugged into existing expressjs applications.

The remedy is to create, isolate, export, and manifest routes to the outer world.

How to modularize route handlers for reusability

The handlers are a beast in their own way.

A collection of related route handlers can be used as a baseline to create the controller layer. The modularization of this newly created/revealed layer can be achieved in two steps as was the case for other use cases. The first step consists of naming, ejecting, and exporting single functions as modules. The second step consists of adding an index to every directory and exporting the content of the directory.

Manifest routes

In essence, requiring a top-level directory will look for index.js at the top of that directory, and make all the route content accessible to the caller.

var routes = require('./routes'); 

Example: /routes has index.js at the top-level directory ~ source

A typical default entry point of the application:

var express = require('express');  
var router = express.Router();

router.get('/', function(req, res, next) {  
  return res.render('index', { title: 'Express' });
});
module.exports = router;  

Example: default /index entry point

Anatomy of a route handler

module.exports = function (req, res) {  };

Example: routes/users/get-user|new-user|delete-user.js

“The most elegant configuration that I've found is to turn the larger routes with lots of sub-routes into a directory instead of a single route file” – Chev source

When individual routes/users sub-directories are put together, the resulting index would look as in the following code sample

var router = require('express').Router();  
router.get('/get/:id', require('./get-user.js'));  
router.post('/new', require('./new-user.js'));  
router.post('/delete/:id', require('./delete-user.js'));  
module.exports = router;    

Example: routes/users/index.js

An update, when routes/users/favorites/ adds more sub-directories:

router.use('/favorites', require('./favorites'));
/* ... other user routes ... */
module.exports = router;

Example: routes/users/index.js ~ after adding a new favorites requirement

We can go the extra mile and group route handlers into controllers. Using a router with a controller's route handlers would look like the following example:

var router = require('express').Router();
var catalogues = require('./controllers/catalogues');

router.route('/catalogues')
  .get(catalogues.getItem)
  .post(catalogues.createItem);
module.exports = router;

Conclusion

Modularization makes expressjs routes reusable, composable, and stable as the rest of the system evolves. Modularization brings elegance to route composition, improves testability, and reduces redundancy.

In this article, we revisited a technique that improves the elegance, testability, and re-usability of expressjs routes, known under the manifest route moniker. We also re-state that the manifest route technique goes the extra mile in modularizing expressjs routes. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #modularization #manifest-routes #nodejs #expressjs

divide et impera

One of the key issues working with large-scale nodejs applications is the management of complexity. Modularization shifts focus to transform the codebase into reusable, easy-to-test modules. This article explores some techniques used to achieve that.

This article is more theoretical; “How to make nodejs applications modular”, which is more technical, may help put it into practice.

In this article we will talk about:

  • Exploration of modularization techniques available within the ecosystem
  • Leveraging module.exports or import/export utilities to achieve modularity
  • Using the index file to achieve modularity
  • How the above techniques can be applied at scale

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book. Testing nodejs Applications Book Cover

Show me the code

This piece of code will go through modularization in the “How to make nodejs applications modular” blog post. For now, we highlight failures and points of interest below.

var express = require('express');
var path = require('path');
var app = express();

/**Data Layer*/
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost:27017/devdb');
var User = require('./models').User; 

/**
 * Essential Middlewares 
 */
app.use(express.logger());
app.use(express.cookieParser());
app.use(express.session({ secret: 'angrybirds' }));
app.use(express.bodyParser());
app.use((req, res, next) => { /** Adding CORS support here */ });

app.use((req, res) => res.sendFile(path.normalize(path.join(__dirname, 'index.html'))));


/** .. more routes + code for app ... */
app.get('/', function (req, res) {
  return res.send('Hello World!');
});


/** code that initialize everything, then comes this route*/
app.get('/users/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});

/**
 * More code, more time, more developers 
 * Then you realize that you actually need:
 */ 
app.get('/admin/:id', function(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
});
/**
 * This would work just fine, but we may also have a requirement to listen to Twitter changes 
app.listen(port, function () {
  console.log('Example app listening on port 3000!')
});
*/

var server = require('http').createServer(app);
var port = process.env.PORT || 8080;
server.listen(port, () => console.log(`Listening on ${ port }`));
var wss = require('socket.io')(server);
//Handling realtime data
wss.on('connection', (socket) => {
    socket.on('error', () => {});
    socket.on('pong', () => {});
    socket.on('disconnect', () => {});
    socket.on('message', () => {});
});

Example: a monolithic server.js before modularization

What can possibly go wrong?

When trying to navigate strategies around modularization of nodejs applications, the following points may be a challenge:

  • Where to start with modularization
  • How to choose the right modularization technique.

The following sections explore how to make the points stated above work.

Modules

In the nodejs context, anything from a variable, to a function, to classes, or an entire library qualifies to become a module.

A module can be seen as an independent piece of code dedicated to doing one and only one task at a time. The amalgamation of multiple tasks under one abstract task, or one unit of work, is also a good module candidate. To sum up, modules come as functions, objects, classes, configuration metadata, initialization data, servers, etc.

Modularization is one of the techniques used to break down large software into smaller, malleable, more manageable components. In this context, a module is treated as the smallest independent composable piece of software, one that does only one task. Testing such a unit in isolation becomes relatively easy. Since it is a composable unit, integrating it into another system becomes a breeze.

Leveraging exports

To make a unit of work a module, nodejs exposes the module.exports/require utilities, alongside the newer import/export syntax. Therefore, modularization is achieved by leveraging the power of module.exports in ES5, equivalent to export in ES2015+ modules. With that idea, the question “Where to start with modularization?” becomes workable.

Every function, object, class, configuration metadata, initialization data, or server that can be exported, has to be exported. That is what leveraging module.exports or import/export utilities to achieve modularity looks like.

After each individual entity becomes exportable, there is a small enhancement that can make importing the entire library, or individual modules, a bit easier, depending on whether the project structure is feature-based or kind-based.
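
A minimal sketch of the two export flavors (module names illustrative):

//lib/utils/validate-email.js - ES5/CommonJS flavor
module.exports = function validateEmail(email) {
  return /@/.test(email);
};

//the same unit in ES2015+ module syntax
export function validateEmail(email) {
  return /@/.test(email);
}

Example: the same unit of work exported in both module systems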

At this point, we may ask ourselves if the techniques explained above can indeed scale.

The “large” aspect of a large-scale application combines lines of code (20k+ LoC), the number of features, third-party integrations, and the number of people contributing to the project. Since these parameters are not mutually exclusive, a one-person project can also be large scale, provided it has a fairly large line count or a sizable amount of third-party integrations.

nodejs applications, like any application stack as a matter of fact, tend to be big and hard to maintain past a threshold. There is no better strategy to manage complexity than breaking down big components into small, manageable chunks.

Large codebases tend to be hard to test, therefore hard to maintain, compared to their smaller counterparts. Obviously, nodejs applications are no exception to this.

Leveraging the index

Using an index file at every directory level makes it possible to load modules with a single instruction. Modules, at this point, are supposed to be peers hosted in the same directory. Directories can mirror categories (kind) or features, or a mixture of both. Adding the index file at every level makes sure we establish control over the divided entities, aka divide and conquer.
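
A sketch of such an index, re-exporting sibling modules so that callers need a single require (file names illustrative):

//lib/utils/index.js - one instruction exposes the whole directory
module.exports = {
  validateEmail: require('./validate-email.js'),
  formatDate: require('./format-date.js')
};

//a caller loads the directory, not the individual files
var utils = require('./lib/utils');
utils.validateEmail('jane@example.com');

Example: an index file exposing directory content to the outer world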

Divide and conquer is an old Roman technique to manage complexity. Dividing a big problem into smaller, manageable ones allowed the Roman army to conquer, maintain, and administer a large chunk of the known world of its time.

Scalability

How the above techniques can be applied at scale

The last question in this series is whether the above-described approach can scale. The key to scalability is to build things that do not scale first; then, when scalability becomes a concern, figure out how to address it. So, the first iteration is not supposed to be scalable.

Since an index is available in every directory, and the index's role is to expose directory content to the outer world, it does not matter if the directory count yields 1, 100, or 1000+. A simple require of the parent directory gives access to 1, 100, or 1000+ libraries.

From this vantage point, the introduction of an index at every directory level comes with scalability as a “cherry on top of the cake”.

Where to go from here

This post focused on the theoretical side of the modularization business. The next step is to put the techniques described herein to the test, in the next blog post.

Conclusion

Modularization is a key strategy in crafting reusable, composable software components. It brings elegance to the codebase, reduces copy/paste occurrences (DRY), improves performance, and makes the codebase testable. Modularization reduces the complexity associated with large-scale nodejs applications.

In this article, we revisited how to increase the testability of key layers by leveraging basic modularization techniques. The techniques discussed in this article are applicable to other aspects of a nodejs application. There are additional complementary materials in the “Testing nodejs applications” book.

References

tags: #snippets #code #annotations #question #discuss

Modularization of redis for testability

To take advantage of multicore systems, nodejs — being a single-threaded JavaScript runtime — spins up multiple processes to guarantee parallel processing capabilities. That works well until inter-process communication becomes an issue.

That is where key-stores such as redis come into the picture, to solve the inter-process communication problem while enhancing real-time experience.

This article showcases how to leverage modular design to provide testable and scalable code.

In this article we will talk about:

  • How to modularize redis clients for reusability
  • How to modularize redis clients for testability
  • How to modularize redis clients for composability
  • The need to have a redis powered pub/sub
  • Techniques to modularize redis powered pub/sub
  • The need for loose coupling between WebSocket and the redis pub/sub system
  • How to modularize WebSocket redis communications
  • How to modularize redis configuration

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Show me the code

Introducing extra components makes it hard to test a system in isolation. This example highlights some of the moving parts we will be discussing in this article:

//creating the Server -- alternative #1
var express = require('express');
var app = express();
var server = require('http').Server(app);

//creating the Server -- alternative #2
var express = require('express'),
    app = express(),
    server = require('http').createServer(app);

//Initialization of WebSocket Server + Redis Pub/Sub
var wss = require("socket.io")(server),
    redis = require('redis'),
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost),
    sub = redis.createClient(rport, rhost);

//HTTP session middleware thing
function middleware(req, res, next){
  /* ... session handling ... */
  next();
}

//exchanging session values
wss.use(function(socket, next){
  middleware(socket.request, socket.request.res, next);
});

//express uses middleware for session management
app.use(middleware);

//somewhere
wss.sockets.on("connection", function(socket) {

  //socket.request.session
  //Now it's available from Socket.IO sockets too! Win!
  socket.on('message', (event) => {
    var payload = JSON.parse(event.payload || event),
        user = socket.handshake.user || false;

    //except when coming from pub
    pub.publish(payload.conversation, JSON.stringify(payload));
  });

  //redis listener
  sub.on('message', function(channel, event) {
    var payload = JSON.parse(event.payload || event),
        user = socket.handshake.user || false;
    wss.
      sockets.
      in(payload.conversation).
      emit('message', payload);
  });
});

Example: a WebSocket server wired to redis pub/sub

What can possibly go wrong?

  • Having redis.createClient() everywhere makes it hard to mock
  • creation/deletion of redis instances (pub/sub) is out of control

One way is to create one instance (preferably while loading the top-level module) and inject that instance into dependent modules (see “Managing modularity and redis connections in nodejs”). The other way relies on the node module loader caching loaded modules, which provides a singleton by default.

The need to have a redis powered pub/sub

JavaScript, and nodejs in particular, is a single-threaded language — but has other ways to provide parallel computing.

It is possible to spin up any number of processes, depending on application needs. Process-to-process communication then becomes an issue: when one process mutates the state of a shared object, for instance, every other process on the same server has to be informed about the update.

Out of the box, that is not feasible. The pub/sub mechanism that redis brings to the table makes it possible to solve problems similar to this one.
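
A minimal sketch of that mechanism, with two separate node_redis clients (the channel name is illustrative):

var redis = require('redis');
var pub = redis.createClient();
var sub = redis.createClient(); //two clients: a subscribed connection cannot publish

sub.subscribe('user:updated');
sub.on('message', function(channel, message) {
  //every process subscribed to this channel learns about the update
  console.log('received on %s: %s', channel, message);
});

pub.publish('user:updated', JSON.stringify({ id: '42' }));

Example: the smallest possible redis pub/sub round trip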

How to modularize redis clients for testability

pub/sub implementations make the code intimidating, especially when the time comes to test.

We assume that the existing code has little to no tests and, most importantly, is not modularized. Alternatively, the code may be well tested and well modularized, but the addition of real-time handling introduces the need to leverage pub/sub to provide a near real-time experience.

The first and easy thing to do in such a scenario is to break code blocks into smaller chunks that we can test in isolation.

  • In essence, pub and sub are both redis clients that have to be created independently, so that they run in two separate contexts. We may be tempted to use the pub and sub objects as the same client; that would be detrimental and create race conditions from the get-go.
  • Delegating pub/sub creation to a utility function makes it possible to mock the clients.
  • The utility function should accept an injected redis. It is possible to go the extra mile and delegate redis instance initialization to its own factory. That way, it becomes even easier to mock the redis instance itself.

Past these steps, other refactoring techniques can take over.

// hard to mock when located in [root]/index.js
var redis = require('redis'),
    rhost = process.env.REDIS_HOST,
    rport = process.env.REDIS_PORT,
    pub = redis.createClient(rport, rhost),
    sub = redis.createClient(rport, rhost);

// Easy to mock with the introduction of a createClient factory
// in /lib/util/redis.js|redis-helper.js
module.exports = function(redis){
  var host = process.env.REDIS_HOST,
      port = process.env.REDIS_PORT;
  return redis.createClient(port, host);
};

How to modularize redis clients for reusability

The example provided in this article scratches the surface on what can be achieved when integrating redis into a project.

What would be the chain of events if, for some reason, the redis server went down? Would that affect the overall health and usability of the whole application?

If the answer is yes, or not sure, that gives a pretty good indication of the need to isolate usage of redis, and make sure its modularity is sound and failure-proof.

Modularization of redis can be seen from two angles: publishing a set of events to the shared store, and subscribing to the shared store for updates on events of interest.

By making the redis integration modular, we also have to think about making sure that redis server downtime or failure does not translate into a cascading effect that may bring the application down.

//in app|server|index.js   
var client = require("redis").createClient(); 
var app = require("./lib")(client);//<- Injection

//injecting redis into a route
var createClient = require('./lib/util/redis');
module.exports = function(redis){
  return function(req, res, next){
    var redisClient = createClient(redis);
    return res.status(200).json({message: 'About Issues'});
  };
};

//usage
var getMessage = require('./')(redis);

How to modularize redis clients for composability

In the previous two sections, we have seen how pub/sub enhanced by a redis server brings near real-time experience to the program.

The problem we faced in both sections is that redis is tightly coupled to all modules, even those that do not need to use it.

Composability becomes an issue when we need to avoid having a single point of failure in the program, as well as providing a test coverage deep enough to prevent common use cases of failures.

// in /lib/util/redis
const redis = require('redis');
module.exports = function(options){
  //hand back an injected fake when one is provided (illustrative), the real library otherwise
  return options && options.mock ? options.mock : redis;
}

The above small factory may look a little weird, but it makes it possible to offset initialization to a third-party service, and to swap in a fake client when testing.

Techniques to modularize redis powered pub/sub

The need to modularize the pub/sub code has been discussed in previous segments.

The issue we still have at this point is at the pub/sub handler level. As we may have noticed already, testing pub/sub handlers is challenging, especially without an up-and-running redis instance.

Modularizing these two kinds of handlers provides an opportunity to test pub/sub handlers in isolation. It also makes it possible to share the handlers with other systems that may need exactly the same kind of behavior.
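
A sketch of one such handler, ejected into its own module with wss injected, so both tests and other subscribers can reuse it (file name assumed):

//lib/handlers/on-conversation-message.js
module.exports = function(wss) {
  return function onMessage(channel, event) {
    var payload = JSON.parse(event.payload || event);
    wss.sockets.in(payload.conversation).emit('message', payload);
  };
};

//wiring: the subscriber, or a test, injects its own wss
sub.on('message', require('./lib/handlers/on-conversation-message')(wss));

Example: a pub/sub handler testable without a running redis instance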

The need for loose coupling between WebSocket and the redis pub/sub system

One example of decoupling pub/sub from redis and making its handlers re-usable can be seen when the WebSocket server has to leverage socket server events.

For example, on a new message read on the socket, the socket server should notify other processes that there is in fact a new message on the socket.

The pub is the right place to post this kind of notification. On a new message posted in the store, the WebSocket server may need to respond to a particular user, and so forth.

How to modularize WebSocket redis communications

There is a failure mode where the same message can be ping-ponged between pub and sub indefinitely.

To make sure such a thing does not happen, a communication protocol should be established. For example, when a message is published to the store by a WebSocket and is destined for all participating processes, a corresponding listener should read from the store and forward the message to all participating sockets. That way, a socket that receives a message simply publishes it, but does not answer the sender right away.

Subscribed sockets can then read from the store, and forward the message to the right receiver.
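
One hedged sketch of such a protocol tags every published message with the process that emitted it, so a subscriber can skip its own messages (pub, sub, and wss as in the earlier example):

var origin = String(process.pid); //identifies this very process

//publishing side: tag every message with its origin
pub.publish(payload.conversation, JSON.stringify({ origin: origin, body: payload }));

//subscribing side: drop what this process itself published
sub.on('message', function(channel, raw) {
  var message = JSON.parse(raw);
  if (message.origin === origin) return; //breaks the ping-pong loop
  wss.sockets.in(channel).emit('message', message.body);
});

Example: preventing pub/sub echo between WebSocket processes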

There is an entire blog post dedicated to modularizing nodejs WebSockets here.

How to modularize redis configuration

The need for configuration applies not only to the redis server, but also to any other server or service.

In this particular instance, we will see how to move redis configuration into an independent module that can then be used alongside the rest of the configuration.

//from the example above
const redis = require("redis");
const port = process.env.REDIS_PORT || "6379";
const host = process.env.REDIS_HOST || "127.0.0.1";
module.exports = redis.createClient(port, host);

//abstracting configurations in lib/configs
module.exports = Object.freeze({
  redis: {
    port: process.env.REDIS_PORT || "6379",
    host: process.env.REDIS_HOST || "127.0.0.1"
  }
});

//using the abstracted configurations
const redis = require('redis');
const configs = require('./lib/configs');
module.exports = redis.createClient(
  configs.redis.port,
  configs.redis.host
);

This strategy to rethink application structure was found here.

Conclusion

Modularization is a key strategy in crafting re-usable, composable software. Modularization brings not only elegance, but also makes copy/paste detectors happy, while improving both performance and testability.

In this article, we revisited how to aggregate redis pub/sub and WebSocket code into composable and testable modules. Grouping related tasks into modules makes it possible to add pub/sub support on demand, and to swap in various solutions as project requirements evolve. There are additional complementary materials in the “Testing nodejs applications” book.

References + Reading List

tags: #snippets #redis #nodejs #modularization

Systems monitoring is critical to systems deployed at scale. In addition to traditional monitoring services native to the nodejs ecosystem, this article explores how to monitor nodejs applications using third-party systems, in a way that covers the entire stack and provides the overall state in one bird's-eye view.

In this article we will talk about:

  • Data collection tools
  • Data visualization tools
  • Self-healing nodejs systems
  • Popular monitoring stacks

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Monitoring

Monitoring, custom alerts, and notification systems

Monitoring overall system health makes it possible to take immediate action when something unexpected happens. Key metrics to look at are CPU usage, memory availability, disk capacity and health, and software errors.

Monitoring systems make it easy to detect, identify, and eventually repair or recover from a failure in a reasonable time. When monitoring production applications, the aim is to be quick to respond to incidents. Sometimes, incident resolution can also be automated: a notification system that actually triggers some sort of script execution to remediate known issues. Such systems are also called self-healing systems.
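
A minimal illustration of the idea, assuming a /health endpoint and pm2 as the process manager; both are assumptions, not prescriptions:

//health-watch.js - polls the app, restarts it when the health check fails
var http = require('http');
var exec = require('child_process').exec;

setInterval(function check() {
  http.get('http://127.0.0.1:3000/health', function(res) {
    if (res.statusCode !== 200) remediate();
  }).on('error', remediate);
}, 30000);

function remediate() {
  //a known issue mapped to a scripted remediation: restart the app
  exec('pm2 restart app', function(err) {
    if (err) console.error('self-healing failed, alerting a human', err);
  });
}

Example: a tiny self-healing watcher sketch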

Monitoring goes hand-in-hand with notification ~ alerting the right systems and people either about what is about to happen (early or predictive detection), or about what just happened (near real-time detection) ~ so that remediation action can be taken. We talk about self-healing (or resilient) systems when the system under stress remediates on its own, automatically and without direct human intervention.

Complex monitoring systems are available for free and for a fee, open as well as closed source; there are a couple worth looking into.

It is a good idea to run a monitoring tool outside the application. This strategy bails us out when downtime originates from an entire data center or the same rack of servers. However, monitoring tools deployed on the same server have the advantage of better taking the pulse of the environment on which the application is deployed. A winning strategy is deploying both, so that notifications go out even when an entire data center has downtime.

Conclusion

In this article, we revisited how to achieve a bird's-eye view of full-stack nodejs application monitoring using third-party systems. We highlighted how logging and monitoring complement each other. There are additional complementary materials in the “Testing nodejs applications” book.

References

#monitoring #nodejs #data-collection #visualization #data-viz

Access to servers via cloud infrastructure raised the bar of what can be achieved by leveraging third-party computing power. One area, amongst multiple others, is the possibility to centralize code repositories and development environments in the cloud.

This blog post is a collection of resources, until additional content lands in it.

In this article we will talk about:

  • Leveraging third-party services for front end development
  • Leveraging cloud-native IDE for backend development
  • Deep integration of github with cloud-native IDEs
  • The cost of moving development to the cloud
  • Services available for cloud development
  • Remote debugging using tunneling

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Cloud IDE

Unlike the front-end dev environment, a cloud IDE for backend code is a little tricky. Requirements of backend code are a little different from the front end, and sometimes involve a lot of moving parts. Things such as databases, authentication, payment processing systems, etc. require special attention when developing code.

The following are some serious contenders to look into when wanting to move a part of backend code completely to the cloud environment.

Front End

Cloud IDEs, especially on the front-end side, are getting a little more serious. Not only do they remove the hassle of dealing with environment setup, they also make it possible to demo end results in real time. There is an increased capability to ship the code as early as possible. There is a myriad of those, but two stand out.

Databases

Tunneling for remote debugging

It is quite a challenge to debug certain things; a WebHook from a live server is one of them. Tunneling tools make it possible to test those. It would be even easier if the development environment were entirely cloud-powered.

Miscellaneous

Conclusion

In this article, we reviewed possibilities to move development to the cloud, the cost associated with the move, and cases where that move makes sense. There are no additional complementary materials in the “Testing nodejs applications” book, but this can be a good start toward centralizing testing efforts.

References

#cloud #nodejs #github #cloud-ide

Is it possible to use one instance of nginx as a reverse proxy for multiple application servers running on different, dedicated IP addresses, under the same domain umbrella?

This article points in the direction of how to achieve that.

Spoiler: it is possible to run an nginx server both as a reverse proxy and as a load balancer.

In this article we will talk about:

  • Configure nginx as a nodejs reverse-proxy server
  • Proxy multiple IP addresses under the same banner: load balancer

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Installation

The magic happens in the upstream webupstreams section. This configuration plays the gateway role, and makes public a server that was otherwise private.


upstream webupstreams {
  # Directs to the process with the least number of connections.
  least_conn;
  server 127.0.0.1:8080 max_fails=0 fail_timeout=10s;
  server localhost:8080 max_fails=0 fail_timeout=10s;

  server 127.0.0.1:2368 max_fails=0 fail_timeout=10s;
  server localhost:2368 max_fails=0 fail_timeout=10s;

  keepalive 512;
}

server {
  listen 80;
  server_name app.website.tld;
  client_max_body_size 16M;
  keepalive_timeout 10;

  # Make site accessible from http://localhost/
  root /var/www/[app-name]/app;
  location / {
    proxy_pass http://webupstreams;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
server {
    listen 80;
    server_name blog.website.tld;
    access_log /var/log/blog.website.tld/logs.log;
    root /var/www/[cms-root-folder|ghost|etc.];

    location / {
        proxy_pass http://webupstreams;
        #proxy_http_version 1.1;
        #proxy_pass http://127.0.0.1:2368;
        #proxy_redirect off;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
    }
}

Example: Typical nginx configuration at /etc/nginx/sites-available/app-name

This article is an excerpt from the “How to configure nginx as a nodejs application proxy server” article.

Conclusion

In this article, we revisited how to proxy multiple servers via one nginx instance, an nginx load balancer for short. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

nodejs application project layouts

A project layout follows a set of conventions around the project's codebase structure. Such conventions can be adopted by a team, or taken verbatim from the developer community. This article explores commonly adopted nodejs application project layouts.

In this article we will talk about:

  • Minimalist layouts
  • Multi-repository layouts
  • Mono-repository layouts
  • Modular layouts

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer tune up their working environment. You can use this link to buy the book.

Minimalist layouts

The minimalist layout applies to the simplest programs. Such programs may take from one file to a couple of files, preferably under 10 files. Anything that goes beyond 10 is liable to reach 100 or even 1000 down the road. Minimal projects can be liberal in the choice of file names, as a directory structure is not really needed. The rule of thumb is YAGNI. It makes sense to have as few files as possible, as long as we have no idea how big the program will grow. Categorizing files by type (category) under these circumstances makes complete sense. An example of a type (category) file structure may look like /utils.js, /index.js, etc.

When the application starts to grow, meaning beyond 10+ files, it makes complete sense to organize files under directories. The question is: should we take a category approach such as /utils/index.js, /controllers/index.js, or does it make more sense to organize files by utility (feature), for instance /inbox/utils/index.js or /catalogue/models/index.js? The next paragraphs provide clarity on this.

Project layout by category

There are multiple categories of small programs that make a software suite run. When those are sliced following the layered architecture (models, views, controllers, services, etc.), the project is structured by category (or by kind). In the early days of a project, when there is no clear specialization, it makes sense to keep it simple and organize directories by category.

The problem becomes a little messier when an inbox, a catalog, or any other major feature gets added to the project. The next paragraph shows how we can specialize the directory structure as features get added to the project.

Project layout by feature

It may take a little longer to realize how organizing a project by the features the application provides makes it simple to track project progress. With a layout by category, when a new major feature is added, it is hard to isolate or detect how far along the project is by simply looking at the layout. File organization by feature makes it clear how many features are in a project.

There are some concerns that this strategy may make code reusability a challenge, if not a mess. When you look at it, those concerns are perfectly legit. For example, the /inbox feature may have model/user.js. By the time an /admin feature gets added to the project, it is almost guaranteed that /admin/model/user.js and /inbox/model/user.js will represent the same thing. There should be an approach that makes sharing cross-feature code feasible. That is where the hybrid project layout comes into play.

Feature/Category Hybrid project layout

The hybrid model combines the best of both the “layout by feature” and the “layout by category”. The starting point is the feature-based project layout. Let's take an example where a feature is a catalog (or inventory). The catalog may have a controller, a model, a service, and a bunch of views. Using the minimalist approach, as long as the catalog has only one controller, one model, one service, and one view, those single categories can be kept as .js files, otherwise directories. Let's assume that further iterations require adding inventory, and both the inventory and the catalog share the product model. One thing we want to avoid is a dependency between inventory and catalog, so it makes sense to add a directory where shared code can be stored. Such directories have recognizable names such as /common, /core, or /lib. Our product model can be moved to /core/models/product.js. It is worth highlighting that /core is, in this case, organized by category and not by feature. This closes our case for the hybrid project layout.
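
An illustrative tree for the catalog/inventory example above (names hypothetical):

.
├── catalogue/
│   ├── controller.js
│   ├── service.js
│   └── views/
├── inventory/
│   └── controller.js
└── core/                  // shared code, organized by category
    └── models/
        └── product.js

Example: a feature/category hybrid project layout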

Multi-repository layouts

The multi-repository layout is often based on git's submodule/workspace model: a parent project may pull in other projects managed by git. Individual projects can be organized by category, by feature, or a mix of both (hybrid). The evident use case is when we have backend code, an SPA for frontend code, a bunch of migration scripts, or even programs such as widgets and CLIs.

Mono-repository layouts

The mono-repository, also known as the monorepo approach, makes the monolith easier to work with. The monolith puts all programs under one roof, which can make deployment hard, especially in a CI environment or when dependencies are tightly coupled. Monoliths are those applications that have database, business, and rendering logic embedded not only in the same repository, but also running at the same time when deployed. They are hard to maintain, depending on how big the project turns out to be, and they are quite a challenge when a program is being shipped multiple times a day.

Modular layouts

The modular approach is more aligned with what npm and nodejs's /node_modules have to offer. Each and every top-level directory (more or less) can serve as an independent, complete application module.

Conclusion

In this article, we revisited various project layout schemes and assessed requirements to adopt one over the other. We barely scratched the surface, but there are additional complementary materials in the “Testing nodejs applications” book that digs deeper into the subject.

References

tags: #monorepo #monolith #project #nodejs
