In this blog post, we talk about the challenges and opportunities associated with deploying nodejs applications on a production server — in a non-cloud environment as well as in a cloud-native environment.
The technique we are exploring is to run the application on a production server the same way we do on localhost, but expose it to the world through an nginx reverse proxy server.
This blog post is a follow-up to two earlier blog posts: “How to install nginx server” and “How to configure nodejs reverse proxy server”.
If you are looking for how to achieve zero downtime deployments with nodejs, read the “How to achieve zero downtime deployment with nodejs” blog post.
In this article, we will talk about:
- Basic configurations
- Manual deployment on a live server
- Deploying on cloud infrastructure
- Leveraging the push-to-deploy technique ~ using a git WebHook to deploy new versions
- Atomic deployment
- Canary deployment
- Blue/Green deployment
Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to level up their knowledge. You may use this link to buy the book.
There is a series of operations that takes place before the code hits the production environment and lands in the hands of customers. One of those operations is packaging, which will be discussed in the next sections.
While keeping in mind that some of those steps may be of interest to the reader, we also have to be mindful that they cannot all be covered in this single piece. But here is the deal: those steps have been covered in the following blog posts:
Now that we have an idea of how configuration works, let's revisit some release strategies.
The “reducing the friction” idea comes from the need to make releases, packaging, and deployments easy, repeatable processes, which in turn makes the whole release pipeline easy to automate.
Reducing friction at deployment time involves reducing the steps it takes to go from the binaries to an application available to the world for use. It has become quite a refrain in this series to use divide and conquer when simplifying a complex process. The following paragraphs highlight the reducing-friction strategy at various stages of the deployment process.
Reducing friction when releasing new software involves reducing the steps it takes to go from raw code to a deployable package.
One way to reduce friction when releasing new software is to package the application into bundles. A bundle is defined here as application code bundled together with its dependency libraries to form one unified software entity that can be deployed as a whole. The same approach is used in stacks written in Java, via .jar|.war file based releases.
The packaging strategy makes sense in a nodejs context when the application is either destined to be deployed offline, or deployed in environments where downloading dependencies constitutes a bottleneck or is outright impossible. When that is the case, a .tar package is produced as a single, versioned, independent release.
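As a minimal sketch of the bundling idea, the following shell script assembles an application together with its already-installed dependencies into a single versioned tarball. The appname name, the version, and all paths are hypothetical, chosen only for illustration.

```sh
#!/bin/sh
# Minimal sketch of a bundle-style release: application code and its
# dependency libraries are packed into one versioned tar artifact.
# `appname`, `1.2.3`, and all paths are hypothetical.
set -e

APP=appname
VERSION=1.2.3
STAGE=$(mktemp -d)

# Stage the application code and its already-installed dependencies
mkdir -p "$STAGE/$APP/node_modules"
printf 'console.log("hello");\n' > "$STAGE/$APP/index.js"  # stand-in for real code

# Produce a single, versioned, independent release artifact
tar -czf "$APP-$VERSION.tar.gz" -C "$STAGE" "$APP"
ls "$APP-$VERSION.tar.gz"
```

Because the dependencies travel inside the archive, the server never needs to reach a package registry at install time.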
Alternatively, a release can consist of a tag that tells the installer how and where to get the application and its dependencies ~ yarn is one example of a package manager that uses this approach. When releasing such packages, dependency libraries are not bundled together with the application; rather, the package manager knows how to get the dependencies and how to install them at deployment time.
As it stands, steps similar to the ones described below can be followed:
- Download source code ~ via wget in the case of a tarball package, or via yarn in the case of discoverable dependency packages. Discoverable packages using orchestration tools such as docker swarm use an approach similar to the one discussed in the previous two cases.
- Install source code ~ the download and binary installation can be combined into one step. This step switches global directories so that baseline services can find the new release at the same address. For that, symlink directories such as /configs and the application top-level directory.
- Switch servers on ~ this step restarts the underlying services that make the application visible to the world. Such services can be, but are not limited to, the database service, nginx, and at last the application service itself.
- Validation ~ this step involves checking for failures in the system, post-release. It expects us to have a rollback strategy, in case something goes wrong at the last minute. In case of unacceptable failures, the strategy described above prescribes rolling back symlinks to the previously deployed version and restarting services. This guarantees that the existing application keeps running, and the release is technically called off. In the case of containerized software, the orchestrator makes it even easier, as failing nodes can simply be switched off while the existing application keeps running as usual.
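The symlink-based install, switch, and rollback steps above can be sketched as follows. The /tmp/demo paths and version names are hypothetical placeholders for real release directories.

```sh
#!/bin/sh
# Sketch of symlink-based releases: each version lives in its own
# directory, and a single `current` symlink points at the live one.
# All paths and version names are hypothetical.
RELEASES=/tmp/demo/releases
CURRENT=/tmp/demo/current

mkdir -p "$RELEASES/v1" "$RELEASES/v2"

# Release v1, then v2, by re-pointing one symlink. The -n flag
# replaces the link itself instead of descending into the old target.
ln -sfn "$RELEASES/v1" "$CURRENT"
ln -sfn "$RELEASES/v2" "$CURRENT"

# Validation found an unacceptable failure in v2? Roll back by
# re-pointing the symlink at the previous version, then restart services.
ln -sfn "$RELEASES/v1" "$CURRENT"
readlink "$CURRENT"
```

Because only the symlink changes, the previous version stays on disk, which is what makes calling a release off cheap.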
Automated release strategies
Releases are like historical records of the state of the software at any given release date. The state of the software can stand in for feature status and bug fixes.
As an example, if a delivery is due every 1st day of the month, we would have trouble referencing a particular software package in any discussion. A version number associated with a software state|change-log makes it possible to refer to a particular delivery in the history of a project.
The widely adopted versioning scheme follows the Semantic Versioning (SemVer) specification. A version number has three parts: MAJOR.MINOR.PATCH. We anticipate breaking changes to be introduced into the system when MAJOR is incremented. We anticipate new features and enhancements when MINOR is incremented. And we anticipate security patches and bug fixes when PATCH is incremented.
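To make the convention concrete, here is a small POSIX shell sketch that classifies the change between two versions. The bump_type helper is a made-up name, not part of any tool, and it assumes plain MAJOR.MINOR.PATCH strings without pre-release suffixes.

```sh
#!/bin/sh
# Classify the difference between two SemVer strings as a major,
# minor, or patch bump. `bump_type` is a hypothetical helper.
bump_type() {
  old_major=${1%%.*}; rest=${1#*.}; old_minor=${rest%%.*}
  new_major=${2%%.*}; rest=${2#*.}; new_minor=${rest%%.*}
  if [ "$new_major" != "$old_major" ]; then
    echo major   # anticipate breaking changes
  elif [ "$new_minor" != "$old_minor" ]; then
    echo minor   # anticipate new features and enhancements
  else
    echo patch   # anticipate security patches and bug fixes
  fi
}

bump_type 1.4.2 2.0.0  # -> major
bump_type 1.4.2 1.5.0  # -> minor
bump_type 1.4.2 1.4.3  # -> patch
```

A script like this is the kind of building block an automated pipeline can use to decide, for instance, whether an update is safe to roll out without human review.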
These conventions present an opportunity to design a plan of attack — or reaction — when deploying a patch, a minor, or a new major version of the application, in a consistent and predictable way. They reduce the friction when releasing new software, and make it possible to automate updates that do not harm the system.
The role of build servers is to provide a deployable build every time there is a push to a tracked branch. The build can be a bundle as well as a managed package. Build servers coupled with a [version control system](https://en.wikipedia.org/wiki/Version_control) constitute the backbone of a Continuous Integration pipeline.
Automated deployment strategies
Making the application available to the world
The final step in working on any project is the ability to deploy it and see it shine, or crash. Deployment is moving code from a development environment to a production environment.
A deployment can be as simple as adding assets to a static assets server, and as complex as upgrading a database engine server. Downtime caused by complex deployments ranges from sub-system disruption to entire-system outage. The key factors are the number of system changes involved in the deployment and the width of the deployment window.
The following deployment strategies can be leveraged alone, or in combination, to deliver a deployment experience with less friction.
Friction is defined by the number of moving parts in the system. The fewer systems involved while delivering a new software version, the less friction, and the better.
The following are some of the deployment strategies that remove the need to have a rollback strategy.
- Atomic deployment
- Canary deployment
- Blue/Green deployment
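As one illustration, a canary rollout can be approximated at the nginx reverse proxy that already fronts the application, by weighting traffic between the stable release and the candidate release. The upstream name, ports, and weights below are hypothetical.

```nginx
# Sketch: route roughly 10% of traffic to a canary release via nginx.
# Upstream name, ports, and weights are hypothetical.
upstream appname {
    server 127.0.0.1:3000 weight=9;  # stable release
    server 127.0.0.1:3001 weight=1;  # canary release
}

server {
    listen 80;

    location / {
        proxy_pass http://appname;
    }
}
```

If validation flags failures on the canary instance, removing its `server` line and reloading nginx ends the experiment without touching the stable release.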
The following are some of the tools that make deployment automation feasible.
- Using push-to-deploy ~ a git WebHook to deploy new applications
- Using a deployment server
- Using asynchronous scheduled deployment jobs
When these strategies and tools are put together, they constitute a baseline for a continuous deployment model that is cheap, or even free.
[How to install nginx server](), [Why coupling nodejs server to an nginx proxy server](), [How to configure nodejs application proxy server]()
Push to deploy
The push-to-deploy model is a classic signature of Heroku. We should note that this technique is made possible by git hooks.
In the push-to-deploy model, a push to a designated branch, say live or master, triggers a task responsible for initiating the deployment sequence on a remote server. There are two sides of the coin we have to look at to make this model work: the server side, and a post-receive hook shipped with the code it is supposed to deploy. The role of the post-receive hook is to detect the end of the git source code upload, then run the symlink and service-restart steps.
```sh
## Server side

# first time on the server side
apt-get update
apt-get install git

# updating|upgrading server side code
apt-get update

# create bare repository + post-receive hook
# first time initialization
cd /path/to/git && mkdir appname.git
cd appname.git
git --bare init
```
The post-receive hook can be similar to the following snippet.
```sh
## Post-receive hook
cd /path/to/git/appname.git/hooks
touch post-receive

# text to add in post-receive:
#   #!/bin/sh
#   GIT_WORK_TREE=/path/to/git/appname git checkout -f

# change permission to make it an executable file
chmod +x post-receive
```
The push-to-deploy idea removes the middleman otherwise necessary to move software from the development environment to a production environment. We should take “production” with a grain of salt: the production environment is relative to the system's end-user. A production environment may in fact be UAT, if we take testers as the prescribed users. Beta, Alpha, and live environments are all production environments from a customer's standpoint.
This model may look attractive, but it can also be chaotic in the case of hundreds of developers shipping on a push keystroke! However, that may not be an issue if the deployment is targeting a shared development environment.
- Continuous deployment with
- Setting up push-to-deploy with
- Rollback strategy. The rollback is a set of tasks to reverse a failed deployment.
Manual deployment on a live server
Manual deployment has a multi-faceted aspect. The most obvious facet is when we use ssh to log into a remote server and execute deployment steps manually. There are some cli tools that make it possible to connect and execute deployment steps from a development environment. This model works, but it is not scalable, especially when multiple servers have to be managed.
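The ssh-based flavor of manual deployment can be sketched like this. The host, path, and service names are hypothetical placeholders, and the final ssh call is printed rather than executed so the script stays a dry run.

```sh
#!/bin/sh
# Sketch of manual deployment over ssh. The host, paths, and the
# service name are hypothetical placeholders.
deploy_cmd() {
  # the command sequence to run on the remote server
  echo 'cd /var/www/appname && git pull && yarn install --production && systemctl restart appname'
}

# Dry run: print the command instead of executing it.
# Drop the leading `echo` to actually deploy.
echo ssh deploy@example.com "$(deploy_cmd)"
```

Every step in that one-liner is a place where a human can make a mistake, which is exactly why this model stops scaling once several servers are involved.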
Deploying on Cloud infrastructure
Almost all cloud players provide infrastructure software. This makes it easy to download and deploy software for our application. The downside of deploying in the cloud is mainly its pricing model.
Here are options that are available in the industry:
- Heroku ~ famously known for the push-to-deploy model. A push to a tracked branch triggers an automatic deployment. This service also provides most configuration needs out of the box.
- AWS ~ one of the major players in the cloud space, makes it possible to deploy manually by uploading a .jar file. It also provides a CLI tool that can be turned into a full-fledged pipeline.
- Google ~ has a line of offerings similar to Amazon's AWS. The main differences are its reliance on Open Source software tools and its pricing model.
- PCF, OpenShift, and other OpenStack platforms offer the same or similar capabilities as described in previous sections.
In this article, we revisited strategies to deploy a nodejs application. There are additional complementary materials on this very subject in the “Testing nodejs applications” book.