Simple Engineering


This article explores how to deploy a nodejs application on a traditional Linux server, in a non-cloud environment. Even though the use case is Ubuntu, any Linux distro or macOS would work perfectly fine.

For information on deploying on non-traditional servers, read: “Deploying nodejs applications”. For zero-downtime knowledge, read “How to achieve zero downtime deployment with nodejs”.

In this article we will talk about:

  • Preparing nodejs deployable releases
  • Configuring nodejs deployment environment
  • Deploying nodejs application on bare metal Ubuntu server
  • Switching on the nodejs application ~ adding nginx as a reverse proxy to make the application available to the world
  • Post-deployment support ~ production support

Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to level up their knowledge. You may use this link to buy the book.

Preparing a deployable release

There are several angles from which to look at release and deployment. There are also several ways to release nodejs code, npm and tar for instance, depending on the environment in which the code is designed to run. Amongst environments, server-side, universal, and command line are classic examples.

In addition, we have to look at it from a dependency management perspective. Managing dependencies at deployment time has two vectors to take into account: whether the deployment happens online or offline.

For the code to be prepared for release, it has to be packaged. Two methods of packaging nodejs software, amongst others, are managed packaging and bundling. More on this is discussed here.

As a plus, versioning should be taken into consideration when preparing a deployable release. The versioning scheme in common circulation is SemVer.
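As a quick illustration, here is one way to cut a SemVer-friendly, deployable tarball with npm; the application name below is hypothetical.

$ npm version patch              # bumps the SemVer patch number in package.json
$ npm pack                       # produces appname-1.0.1.tgz, ready to ship
$ tar -tzf appname-1.0.1.tgz     # inspect the content of the bundle

Example: bumping a patch version and packaging a release tarball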

Configuring deployment environment

Before we dive into deployment challenges, let's look at key software and configuration requirements.

As usual, first-time work can be hard to do. But the yield should then be predictable, flexible to improve, and capable of being built upon for future deployments. For context, the deployment environment we are talking about in this section is the production environment.

Two key configurations are an nginx reverse proxy server and nodejs. But, “Why couple a nodejs server to an nginx reverse proxy server”? The answer to this question is twofold. First, both nodejs and nginx are single-threaded, non-blocking, reactive systems. Second, the wide adoption of these two tools by the developer community makes them an easy choice, from both the influence and the availability of collective knowledge the developer community shares via forums/blogs and popular Q&A sites.

How to install the nginx server, and how to configure nginx as a nodejs application proxy server, are each covered in dedicated articles.

Additional tools to install and configure may include: the mongod database server, the redis server, monit for monitoring, and upstart for enhancing the init system.

There is a need to better understand the tools required to run a nodejs application. It is also our responsibility as developers to have a basic understanding of each tool and the role it plays in our project, in order to figure out how to configure it.

Download source code

Starting from the utility perspective, there is quite a collection of tools required to run on the server, alongside our nodejs application. Such software needs to be installed, updated ~ for patch releases, and upgraded ~ to new major versions, to keep the system secure and capable (bug-free/enhanced with new features).

From the packaging perspective, both the supporting tools and nodejs applications adhere to a packaging strategy that makes them easy to deploy. When the package is indeed a bundle, wget/curl can be used to download binaries. When dealing with discoverable packages, npm/yarn/brew can be used to download our application and its dependencies. Both operations yield the same outcome: un-packaging, configuration, and installation.

To deploy a versioned nodejs application on a bare metal Ubuntu server, understanding file system tweaks such as symlink-ing can save time for future deployments.

#first time on server side  
$ apt-get update
$ apt-get install git

#updating|upgrading server side code
$ apt-get update
$ apt-get upgrade
$ brew upgrade 
$ npm upgrade 

# Package download and installs 
$ /bin/bash -c "$(curl -fsSL https://url.tld/version/install.sh)"
$ wget -O - https://url.tld/version/install.sh | bash

# Discoverable packages 
$ npm install application@next 
$ yarn add application@next 
$ brew install application

Example: downloading and installing packages and applications

The commands above can be automated via a scheduled task. Both npm and yarn support the installation of applications bundled in a .tar file. See an example of a simple script source. We have to be mindful to clean up download directories, to save disk space.
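A minimal sketch of such a scheduled task follows; the download URL, user, and application name are hypothetical.

# file: /etc/cron.d/appname-release ~ nightly release pull
0 3 * * * deploy cd /var/www/releases && wget -q https://url.tld/releases/appname-latest.tgz && npm install ./appname-latest.tgz && rm -f appname-latest.tgz

Example: a scheduled task that downloads, installs, and cleans up a release tarball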

Switching on the application

It sounds repetitive, but running npm start does not guarantee that the application is visible outside the metal server box it is hosted on. That magic belongs to the nginx reverse proxy we referred to in earlier paragraphs.

A typical nodejs application needs to start one or more of the following services each time the application is rebooted.

# symlinking new version to default application path
$ ln -sfn /var/www/new/version/appname /var/www/appname 

$ service nginx restart #nginx|apache server
$ service redis restart #redis server
$ service mongod restart #database server in some cases
$ service appname restart #application itself

Example: restarting services after symlinking a new version

PS: The services above are managed with upstart.

Adding the nginx reverse proxy makes the application available to the outside world. Switching off the application can be summarized in one command: service nginx stop. Likewise, switching it off and back on can be done with one command: service nginx restart.
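For reference, a minimal reverse proxy server block may look like the following; the file name, domain, and application port (3000) are assumptions.

# in /etc/nginx/sites-enabled/appname
server {
  listen 80;
  server_name example.com;

  location / {
    proxy_pass http://127.0.0.1:3000;                        # the nodejs application
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

Example: a minimal nginx reverse proxy server block for a nodejs application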

Post-deployment support

Asynchronous scheduled tasks can be used to resolve a wide range of issues. Background tasks include fetching updates from third-party data providers, system health checks and notifications, automated software updates, database cleaning, cache busting, and scheduled expensive/CPU-intensive batch processing jobs, just to name a few.

It is possible to leverage the existing OS-provided scheduled task processing infrastructure to achieve any of the named use cases, and third-party tools can do exactly the same job.
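On a Linux box, cron is the typical OS-provided infrastructure for such tasks. A minimal sketch, with hypothetical script paths and user, looks like this:

# file: /etc/cron.d/appname-maintenance
*/5 * * * * deploy /var/www/appname/scripts/health-check.sh    # system health check every 5 minutes
0 2 * * *   deploy /var/www/appname/scripts/clean-database.sh  # database cleaning once a day
0 4 * * 0   deploy /var/www/appname/scripts/batch-report.sh    # weekly CPU intensive batch job

Example: scheduling background tasks with cron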

Rule of thumb

The following is a mental model that can be applied to the common release use cases. It may be basic for DevOps professionals, but it is useful enough for developers doing some operations work as well.

  • Prepare deployable releases
  • Update and install binaries ~ using apt, brew etc.
  • Download binaries ~ using git, wget, curl or brew
  • Symlink directories (/log, /config, /app)
  • Restart servers and services ~ redis, nginx, mongodb and the app
  • When something goes bad ~ walk two steps back. That is our rollback strategy (a sketch follows below).

This model can be refined to make most of these tasks repeatable and automated, deployments included.
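The rollback itself can be as simple as re-pointing the symlink to the previous version and restarting services. A minimal sketch, with hypothetical paths:

# rollback ~ point the application path back to the previous version
$ ln -sfn /var/www/previous/version/appname /var/www/appname
$ service appname restart
$ service nginx restart

Example: rolling back by re-pointing the application symlink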

Conclusion

In this article, we revisited quick, easy, and most basic nodejs deployment strategies. We also revisited how to expose the deployed applications to the world using nginx as a reverse proxy server. There are additional complementary materials in the “Testing nodejs applications” book.

References

#snippets #code #annotations #question #discuss

This article revisits the essentials of how to install upstart, an event-based daemon for starting/stopping tasks on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. You can grab a copy of the book at this link.

There is a plethora of task execution solutions, for instance systemd and init, which are rather complex to work with. That makes upstart a good alternative to such tools.

In this article you will learn about:

  • Tools available for task execution
  • How to install upstart
  • How to write a basic upstart task

Installing upstart on Linux

It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and Aptitude-enabled systems as follows:

$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
$ apt-get dist-upgrade # Upgrades packages, handling changed dependencies (may add or remove packages)

Example: updating aptitude binaries

At this point, most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.

Installing upstart on Linux using apt
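On older Ubuntu releases (14.04 and earlier), upstart ships as the default init system; on newer, systemd-based releases, the package may still be installable via apt. A minimal sketch, assuming the package is named upstart in the configured sources:

$ ps -p 1 -o comm=            # prints the running init system ~ init or systemd
$ sudo apt-get update
$ sudo apt-get install upstart

Example: checking the running init system and installing upstart via apt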

Installing upstart on macOS

upstart is a utility designed mainly for Linux systems. However, macOS has its equivalent, launchctl, designed to start/stop processes before/after system restarts.

Installing upstart on a Windows machine

Whereas macOS and Linux systems are quite relaxed when it comes to working with system processes, Windows is a beast in its own way. upstart was built for *nix systems, but Windows has its own equivalent: the Service Control Manager. It basically has the same ability to check and restart processes that are failing.
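Since one of the goals above is to write a basic upstart task, here is a minimal sketch of a job definition; the application name and paths are hypothetical.

# file: /etc/init/appname.conf ~ a minimal upstart job
description "appname nodejs application"

start on (filesystem and net-device-up IFACE=lo)
stop on shutdown

respawn                  # restart the process if it dies
respawn limit 10 5       # give up after 10 respawns within 5 seconds

exec /usr/bin/node /var/www/appname/server.js

Example: a basic upstart job definition. Once the file is in place, sudo service appname start should bring the process up.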

Automated upgrades

Before we dive into automatic upgrades, we should consider nuances associated with managing an init daemon such as upstart. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following the SemVer ~ aka Semantic Versioning ~ standard, it is not recommended to automate minor or major version upgrades. This is because minor versions, as well as major versions, are subject to introducing breaking changes or incompatibilities between two versions. Patches, on the other hand, do not introduce breaking changes; those can therefore be automated.

For a critical piece of infrastructure of the process state management calibre, we expect breaking changes when a configuration setting is added or dropped between two successive versions. Upstart provides backward compatibility, so the chance of breaking changes between two minor versions is really minimal.

We should highlight that it is always better to upgrade at deployment time. The process is even easier in a containerized context. We should also automate patches only, so as not to miss security patches.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work are: one, a blacklist of packages we do not want to update automatically, and two, the particular origins (package sources) we would love to update on a periodical basis. That is captured in the following configuration.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step is necessary to make sure the unattended-upgrades download, install, and cleanup tasks have a default period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever needed. That is where the second tool, apticron, installed in the first paragraph, intervenes. To make it work, we specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf
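To verify the setup without waiting for the scheduled run, a dry run can be triggered manually (note that the binary name is singular):

$ sudo unattended-upgrade --dry-run --debug   # simulate a run and print what would be upgraded

Example: dry run of unattended-upgrades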

Conclusion

In this article we revisited ways to install upstart on various platforms. Even though configuration was mostly beyond the scope of this article, we still managed to get the everyday quick refreshers out.

References

#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

This article revisits the essentials of how to install nginx, a non-blocking, single-threaded, multipurpose web server, on development and production servers.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. You can grab a copy of the book at this link.

Installing nginx on Linux

It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and Aptitude-enabled systems as follows:

$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
$ apt-get dist-upgrade # Upgrades packages, handling changed dependencies (may add or remove packages)

Example: updating aptitude binaries

At this point, most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.

Installing nginx on Linux using apt

Updating/upgrading or a first install of the nginx server can be achieved with the following commands.

$ sudo add-apt-repository ppa:nginx/stable
$ sudo apt-get update 
$ sudo apt-get install nginx 

# To restart the service:
$ sudo service nginx restart 

Example: updating PPA and installing nginx binaries

Adding the nginx PPA in the first step is only required for first installs, on a system that does not yet have the PPA available in its package sources.
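Once installed, a quick sanity check confirms the version and that the server is answering locally:

$ nginx -v                      # prints the installed nginx version
$ service nginx status          # confirms the service is running
$ curl -I http://localhost      # the default server block should answer with HTTP 200

Example: verifying an nginx installation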

Installing nginx on macOS

In case homebrew is not already available on your mac, this is how to get it up and running. On its own, homebrew depends on the ruby runtime being available.

homebrew is a package manager and software installation tool that makes most developer tools installation a breeze.

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Example: installation instruction as provided by brew.sh

Generally speaking, this is how to install/uninstall things with brew

$ brew install wget 
$ brew uninstall wget 

Example: installing/uninstalling wget binaries using homebrew

We have to stress the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.

It is always a good idea to update the system before starting work. And that, even when we have a daily task that automatically updates the system for us. macOS can use the homebrew package manager for maintenance matters. To update/upgrade or check outdated packages, the following commands help.

$ brew outdated                   # lists all outdated packages
$ brew cleanup -n                 # visualize the list of things that are going to be cleaned up

$ brew update                     # updates brew itself and the list of available formulae
$ brew upgrade                    # upgrades all outdated packages on the system
$ brew upgrade <formula>          # upgrades one formula

$ brew install <formula>@<version>  # installs <formula> at a particular version
$ brew tap <user>/<repo>            # adds a third-party repository of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap <user>/<repo> && brew tap <user>/<repo>
$ brew services start <formula>@<version>

Example: key commands to work with homebrew cli

For more information, visit: Homebrew ~ FAQ.

Installing nginx on a Mac using homebrew

$ brew install nginx@1.17.8  # as in <formula>@<version>

Example: installing nginx using homebrew

Installing nginx on a Windows machine

macOS comes with Python and Ruby already enabled, two languages somehow required to run a nodejs environment successfully; Windows does not, but nginx remains an easy target there, as it provides Windows binaries that we can download and install in a couple of clicks.

Automated upgrades

Before we dive into automatic upgrades, we should consider nuances associated with managing an nginx deployment. The updates fall into two major, quite interesting, categories: patch updates and version upgrades.

Following the SemVer ~ aka Semantic Versioning ~ standard, it is not recommended to consider minor/major versions for automated upgrades. One of the reasons is that these versions are subject to introducing breaking changes or incompatibilities between two versions. Patches, on the other hand, are less susceptible to introducing breaking changes, hence ideal candidates for automated upgrades. Another reason, among others, is that security fixes are released as patches to a minor version.

In the case of a web server, breaking changes may be introduced when a critical configuration setting is added or dropped between two successive versions.
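Before automating anything, it helps to see which version is installed and which candidate apt would upgrade to:

$ apt-cache policy nginx     # shows the installed version and the candidate from the configured sources

Example: inspecting installed and candidate nginx versions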

We should highlight that it is always better to upgrade at deployment time. The process is even easier in a containerized context. We should also automate patches only, so as not to miss security patches.

In the context of Linux, we will use the unattended-upgrades package to do the work.

$ apt-get install unattended-upgrades apticron

Example: install unattended-upgrades

Two things to fine-tune to make this solution work are: one, a blacklist of packages we do not want to update automatically, and two, the particular origins (package sources) we would love to update on a periodical basis. That is captured in the following configuration.

Unattended-Upgrade::Allowed-Origins {
//  "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security"; # upgrading security patches only 
//   "${distro_id}:${distro_codename}-updates";  
//  "${distro_id}:${distro_codename}-proposed";
//  "${distro_id}:${distro_codename}-backports";
};

Unattended-Upgrade::Package-Blacklist {
    "vim";
};

Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades

The next step is necessary to make sure the unattended-upgrades download, install, and cleanup tasks have a default period: once or twice a day, or once a week.

APT::Periodic::Update-Package-Lists "1";            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages "1";   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval "7";               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade "1";              # install downloaded packages once a day

Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades

This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever needed. That is where the second tool, apticron, installed in the first paragraph, intervenes. To make it work, we specify which email to send messages to, and that will be all.

EMAIL="<email>@<host.tld>"

Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf

Conclusion

In this article we revisited ways to install nginx on various platforms. Even though configuration was beyond the scope of this article, we still managed to get the everyday quick refreshers out.

Reading list

#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications

Your encryption will need keys


As opposed to most other SSL certificate issuers, Let's Encrypt is not only free to use but also easy to install and update. This write-up highlights the steps I followed to install mine on Hoo.gy.

This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wider audience of software developers, to set up a working environment. You can grab a copy of the book at this link.

The motivation to write this article is threefold. First, this article serves as a reference for future needs when using Let's Encrypt. Second, sharing experience with the developer community is another form of learning. Third, it is a token of appreciation to the team that democratizes this core part of the security infrastructure.

This blog was first published under the title “How to install Let's Encrypt SSL certificate on Ubuntu and nginx server” on dev.to and Medium.

In this article you will learn:

  • What are required binaries, and how to install those
  • How to generate required Keys and Certificates
  • How to install certificate(s) on an nginx instance
  • How to enforce HTTPS redirection
  • How to automate certificate (Auto-)renewal

Acknowledgments: To my friend and security nerd @jc_uwimpuhwe for proof-reading and idea enrichments.

Install necessary software

Hoo.gy runs on an Ubuntu 14 LTS Linux box located in a NYC DigitalOcean datacenter. The NodeJS web server is coupled with Nginx. From this perspective, I will suppose your system runs a similar stack.

Certificate issuance is done via a bot, Certbot, which covers a wide variety of operating systems and web servers. The first step was to update and install the latest packages, as well as make sure Ubuntu includes the new package source.

$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx 

The second step, the last command above, installs certbot on the box. It is good for an Nginx server, but more options can be found at eff.org.

There are two possible modes to generate SSL certificates. That is going to be the subject of the following section.

Generate Key and Certificate

The default mode is designed for regular Linux users. Everything is taken care of after you run the next command. Certbot generates a proper key+certificate pair and automatically updates the Nginx configuration files.

 $ sudo certbot --nginx 

For more advanced users, who would rather generate the key+certificate themselves and install the certificates as it pleases them, this command will help:

 $ sudo certbot --nginx certonly

After generating the private key and certificate, you will in addition need to install the certificates, enforce HTTPS redirection, and automate certificate renewals.

Install Certificate(s) on Nginx

Since you are already running Nginx in production, chances are you don't want anything to mess with your custom configurations. The second command gives you just that. There are two ways to install certificates: the first is to keep your configurations intact and symlink the new certificates; the second is obviously to change the configuration to point to the new location. I preferred the latter.

# in /etc/nginx/sites-enabled/[your-config-file]
server{
  ...  
  listen 443 ssl;
  server_name example.com;
  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;  
  ...
}

For some reason, I was accustomed to chained.crt for the certificate and key.crt for the keys. The switch is not that difficult though. Certbot generates the same files under slightly different names: privkey.pem for the key, and fullchain.pem for the certificates.
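After pointing the configuration at the new certificate files, it is worth validating and reloading Nginx so the change is applied without downtime; this assumes the default service name.

$ sudo nginx -t                  # validate the updated configuration
$ sudo service nginx reload      # reload workers with the new certificates

Example: validating and reloading Nginx after installing certificates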

Enforce HTTPS redirection

There is a lot of discussion on this topic. Software developers are the worst people to have a discussion with; we always end up in tribalism, and who does it better. One way that made sense in my case was to create a new server block in the existing configuration file and redirect any request matching this new server block to a server block that listens only on HTTPS. The new block looks somewhat like the following:

server{
 listen 80;
 server_name example.com www.example.com;
 
 # redirect, and rewrite all links to https://example.com 
 return 301 https://example.com$request_uri;
 
 # Alternatively: if you want to forward without rewriting the requested URL 
 # return 301 https://$server_name$request_uri;
}

Customization of the nginx configuration can be a whole new topic on its own, but I cannot close this post without talking about two things: auto-renewal (using a cron job) and redirecting all traffic to the secure channel.

Automatic renewal

SSL certificates issued by Let's Encrypt last for 90 days, which is not a bad thing on its own. If you have 1000+ servers to update every 90 days, though ... that would be a nightmare. Hence you need some sort of automation. How to go about that is the last topic in this blog post.

First of all, renewing a certificate from Let's Encrypt takes a second ... if everything works according to plan.

$ sudo certbot renew --dry-run
# or simply:
# $ sudo certbot renew
# restart the nginx server
$ sudo service nginx restart 

If you are lucky enough to have a smaller configuration and used the auto-renewal strategy, you may have a cronjob similar to the following:

# file: /etc/cron.d/certbot
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew

Code snippet from Ishigoya ~ Serverfault

It is possible to customize the frequency and the restart command. The cronjob lets you calibrate dates using this star format: cronjob [minute, hour, day of the month, month, day of the week].

  • This cron runs every day around 5:30 server time: 30 5 * * * ...
  • This one runs at the same time, twice a month (1st and 15th): 30 5 1,15 * * ...
  • This last one runs at the same time, every two months: 30 5 * */2 * ...

So your cronjob may end up looking like the following if you wish to renew certificates every day around 5:30 server time.

30 5 * * * certbot renew --post-hook "service nginx restart"
# alternatively: using systemctrl 
# 30 5 * * * certbot renew --post-hook "systemctl reload nginx"

Please consider a donation for this service at Let's Encrypt. Even though the thoughts discussed in this article seem simple (not to say naive, security-wise), there are enhancements done atop the Let's Encrypt foundations that make this choice rock solid. If you work in a container environment, especially Kubernetes, you should check Kelsey Hightower's Kube Cert Manager project on GitHub. Netflix's Lemur is another alternative to manage certificates; you can read an introductory article here.

Reading list

#nginx #ssl #letsencrypt #devops #HowTo #TestingNodejsApplications