Deploying nodejs applications

In this blog post, we talk about the challenges and opportunities associated with deploying nodejs applications on a production server, both in a non-cloud environment and in a cloud-native environment.

The technique we explore is to run the application on a production server the same way we run it on localhost, but expose it to the world through an nginx reverse proxy server.
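As a concrete sketch, a minimal nginx server block that proxies traffic to a nodejs application could look like the following. The domain name, port, and file name are illustrative assumptions; on a real server the file would live under nginx's configuration directory (for example /etc/nginx/sites-available/).

```sh
# write an illustrative reverse-proxy configuration to a local file
cat > nodejs-app.conf <<'EOF'
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # forward to the nodejs app
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
```

The `Upgrade`/`Connection` headers keep WebSocket connections working through the proxy, which many nodejs applications rely on.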

This blog post is a follow-up to three earlier posts: “How to install nginx server”, “How to configure nginx as a nodejs reverse proxy server”, and “Easy nodejs deployment”.

In this article, we will talk about:

If you are looking for how to achieve zero-downtime deployments with nodejs, read “How to achieve zero downtime deployment with nodejs” instead.

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer level up their knowledge. You may use this link to buy the book.

nodejs configuration

There is a series of operations that take place before the code hits the production environment and lands in the hands of customers. One of those operations is packaging, which will be discussed in the next sections.
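One configuration technique worth sketching here is environment-driven configuration: the same code base reads its settings from environment variables, so only the environment differs between localhost and production. The variable names and values below are illustrative.

```sh
#!/bin/sh
# Environment-driven configuration: settings come from the environment,
# with sensible fallbacks for localhost (names and values illustrative).
NODE_ENV="${NODE_ENV:-development}"   # falls back to development
PORT="${PORT:-3000}"                  # falls back to 3000
echo "booting in $NODE_ENV mode on port $PORT"
# in production, the very same application would be started as:
#   NODE_ENV=production PORT=8080 node server.js
```

Because nothing is hard-coded, promoting a build from one environment to the next requires no code change at all.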

While keeping in mind that some of those actions may be of interest to the reader, we also have to be mindful that they cannot all be covered in this single piece. The good news is that those steps have been covered in the following blog posts:

Now that we have an idea of how configuration works, let's revisit some release strategies.

Reducing Friction

The “reducing friction” idea comes from the need to make releases, packaging, and deployments easy, repeatable processes, so that the whole release pipeline becomes easy to automate.

Reducing friction at deployment time involves reducing the steps it takes to go from the binaries to an application available to the world for use. It is by now quite a habit in this series to use divide and conquer when simplifying a complex process. The following paragraphs highlight the reducing-friction strategy at various stages of the deployment process.

Reducing friction when releasing new software involves reducing the steps it takes to go from raw code to an assembled, deployable package.

One way to reduce friction when releasing new software is to package the application into bundles. A bundle is defined here as the application code bundled together with its dependency libraries to form one unified software entity that can be deployed as a whole. The same approach is used in Java stacks, via .jar/.war file based releases.

The packaging strategy makes sense in a nodejs context when the application is destined to be deployed offline, or in environments where downloading dependencies either constitutes a bottleneck or is outright impossible. When that is the case, a .tar package is produced as a single, versioned, independent release.

Alternatively, a release tag tells the installer how and where to get the application and its dependencies; the npm and yarn package managers are examples that use this approach. When releasing such packages, dependency libraries are not bundled together with the application; rather, the package manager knows how to fetch the dependencies and how to install them at deployment time.

As it stands, steps similar to the ones described below can be followed:
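A minimal sketch of a bundle-style release, with illustrative names and a stub package.json standing in for a real project:

```sh
#!/bin/sh
# Sketch of a bundle-style release: code and dependencies are archived
# into one versioned tarball (names and the stub project illustrative).
set -e
APP=appname
VERSION=1.0.0
mkdir -p "$APP/node_modules"
printf '{"name":"%s","version":"%s"}\n' "$APP" "$VERSION" > "$APP/package.json"
# in a real project: (cd "$APP" && npm ci) to install locked dependencies
tar -czf "$APP-$VERSION.tar.gz" "$APP"   # the single deployable artifact
tar -tzf "$APP-$VERSION.tar.gz"          # list the bundle's contents
```

The resulting tarball can be copied to any server and unpacked, with no dependency download required at deployment time.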

Automated release strategies

Releases are like historical records of the state of the software at any given release date. The state of the software can stand in for feature status and bug fixes.

As an example, if a delivery were due every 1st day of the month, we would have trouble referencing a particular software package in any discussion. A version number associated with a software state and change-log makes it possible to refer to a particular delivery in the history of a project.

The most widely adopted versioning scheme follows the Semantic Versioning (SemVer) specification. The three parts of SemVer are MAJOR.MINOR.PATCH. We anticipate breaking changes being introduced into the system when MAJOR is incremented, new features and enhancements when MINOR is incremented, and security patches and bug fixes when PATCH is incremented.
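As a quick illustration, bumping the PATCH part is what `npm version patch` automates; the snippet below performs the same bump with plain shell parameter expansion, using an illustrative version string:

```sh
#!/bin/sh
# SemVer is MAJOR.MINOR.PATCH; bump PATCH for a bug-fix release.
version="1.4.2"
major=${version%%.*}          # 1
rest=${version#*.}            # 4.2
minor=${rest%%.*}             # 4
patch=${rest#*.}              # 2
patch=$((patch + 1))          # 3
echo "$major.$minor.$patch"   # prints 1.4.3
```

A MINOR or MAJOR bump works the same way, except the parts to the right of the incremented one reset to zero.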

These conventions present an opportunity to design a plan of attack, or reaction, for deploying a patch, a minor release, or an upgrade to a new major version of the application, in a consistent and predictable way. They reduce friction when releasing new software and allow automation of updates that do not harm the system.

You broke my code: understanding the motivations for breaking changes in APIs

Build servers

The role of build servers is to provide a deployable build every time there is a push to a tracked branch. The build can be of a bundle as well as of a managed-package nature. Build servers coupled to a [version control system](https://en.wikipedia.org/wiki/Version_control) constitute the backbone of a Continuous Integration pipeline.
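The essence of such a build job can be sketched as follows. The npm steps are commented out so the sketch stays runnable anywhere; `set -e` is what guarantees that a red test run never produces a deployable artifact.

```sh
#!/bin/sh
# Sketch of the job a build server runs on every push to a tracked branch.
result=$(sh -c '
  set -e          # abort the pipeline at the first failing step
  # npm ci        # reproducible dependency install
  # npm test      # fail the build on red tests
  false           # simulate a failing test step
  # npm pack      # produce the deployable tarball
  echo "artifact published"
' || echo "build aborted")
echo "$result"    # prints build aborted
```

Any CI server or service ultimately wraps a script of this shape in triggers, logs, and artifact storage.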

A non-exhaustive list of CI servers and services: Distelli, Magnum, and Strider.

Automated deployment strategies

Making the application available to the world

The final step in working on any project is the ability to deploy it and see it shine, or crash. Deployment is moving code from a development environment to a production environment.

A deployment can be as simple as adding assets to a static assets server, or as complex as upgrading a database engine server. Downtime caused by complex deployments ranges from sub-system disruption to an entire system outage. The key factors are the number of system changes involved in the deployment and the width of the deployment window.

The following deployment strategies can be leveraged alone, or in combination, to deliver a deployment experience with less friction.

Friction is defined by the number of moving parts in the system. The fewer systems involved while delivering a new software version, the less friction, and the better.

The following are some of the deployment strategies that remove the need to have a rollback strategy.

The following are some of the tools that make deployment automation feasible.

When these two strategies are put together, they constitute a baseline for a continuous deployment model that is cheap or free.

[How to install nginx server](), [Why coupling nodejs server to an nginx proxy server](), [How to configure nginx as a nodejs application proxy server]()

Push to deploy

The push-to-deploy model is a classic signature of Heroku. We should note that this technique is made possible by git.

In the push-to-deploy model, a push to a designated branch, say live or master, triggers a task responsible for initiating the deployment sequence on another remote server. There are two sides of the coin we have to look at to make this model work: the server side, and a post-receive hook shipped with the code it is supposed to deploy. The role of the post-receive hook is to detect the end of the git source code transfer, then symlink and restart the binaries.

```sh
## Server Side
# first time on the server
apt-get update
apt-get install git

# updating|upgrading server-side packages
apt-get update

# create a bare repository + post-receive hook
# first-time initialization
cd /path/to/git && mkdir appname.git
cd appname.git
git --bare init
```

Example: source

The post-receive hook can be similar to the following snippet.

```sh
## Post-receive hook
cd /path/to/git/appname.git/hooks

# write the following two lines into the post-receive file
cat > post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/path/to/git/appname git checkout -f
EOF

# change permissions to make the hook an executable file
chmod +x post-receive
```

The push-to-deploy idea removes the middleman otherwise necessary to move software from the development environment to any production environment. We should take “production” with a grain of salt: the production environment is relative to the system's end user. A production environment may in fact be UAT if we take testers as the prescribed users. Beta, alpha, and live environments are all production environments from a customer's standpoint.

This model may look attractive, but it can also be chaotic when hundreds of developers ship on a push keystroke! However, that may not be an issue if the deployment targets a shared development environment.
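The whole loop can be simulated locally, which is a handy way to test a post-receive hook before installing it on a real server. All paths below are illustrative; on a real setup the bare repository lives on the server and the push travels over ssh.

```sh
#!/bin/sh
# Local simulation of push-to-deploy: a bare repository whose
# post-receive hook checks the pushed code out into a deploy directory.
set -e
base="$PWD/p2d-demo"
rm -rf "$base"
mkdir -p "$base/appname"

# "server" side: bare repository + post-receive hook
git init -q --bare "$base/appname.git"
git -C "$base/appname.git" symbolic-ref HEAD refs/heads/master
cat > "$base/appname.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE=$base/appname git checkout -f
EOF
chmod +x "$base/appname.git/hooks/post-receive"

# "developer" side: commit, add the bare repository as a remote, push
git init -q "$base/dev"
git -C "$base/dev" symbolic-ref HEAD refs/heads/master
echo "console.log('live')" > "$base/dev/server.js"
git -C "$base/dev" add server.js
git -C "$base/dev" -c user.email=dev@example.com -c user.name=dev \
    commit -qm "first release"
git -C "$base/dev" remote add live "$base/appname.git"
git -C "$base/dev" push -q live master
ls "$base/appname"            # server.js was deployed by the hook
```

Replacing the local remote path with `ssh://user@server/path/to/git/appname.git` turns this simulation into the real thing.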

Git WebHook

Manual deployment on a live server

Manual deployment has a multi-faceted aspect. The obvious facet is using ssh to log into a remote server and execute the deployment steps by hand. There are also some cli tools that make it possible to connect and execute deployment steps from a development environment. This model works, but it is not scalable, especially when multiple servers have to be managed.
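One low-tech way to tame the manual model is to capture the ssh session in a script, so deployment becomes a single command. The host, paths, pm2 process name, and the deploy.sh file name below are illustrative assumptions.

```sh
#!/bin/sh
# Capture the manual deployment steps once, in a reusable script.
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
ssh user@example.com <<'REMOTE'
cd /var/www/appname
git pull origin master
npm ci --production
pm2 reload appname
REMOTE
EOF
chmod +x deploy.sh
```

This still does not scale to many servers, but it removes the copy-paste errors that plague ad-hoc ssh sessions.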

Deploying on cloud infrastructure

Almost all cloud players provide infrastructure software, which makes it easy to download and deploy software for our application. The downside of deploying in the cloud lies mostly in the pricing model.

Here are options that are available in the industry:

Conclusion

In this article, we revisited strategies for deploying a nodejs application. There are additional complementary materials on this very subject in the “Testing nodejs applications” book.

References