How to configure nodejs applications

Configuration is at the forefront of any application. This article discusses a couple of strategies for configuring nodejs applications, and some tools that can be leveraged to that end.

Techniques explained in this blog post are also available, in more detail, in the “Configurations” chapter of the “Testing nodejs Applications” book. You can grab a copy of the book on this link.

In this article you will learn about:

Layers of configuration of nodejs applications

Although this blog article provides an overview of tools and configurations, it leaves modularization of configurations in a nodejs setting to another blog post: “Modularize nodejs configurations”.

From a production readiness perspective, there are two distinct layers of application configuration, at least in the context of this blog post.

The first layer consists of configurations required by the system that hosts the nodejs application. Database server settings, monitoring tools, SSH keys, and other third party programs running on the hosting entity are a few examples that fall under this category. We will refer to these as system variables/settings.

The second layer consists of configuration the nodejs application needs to execute its intrinsic business logic. These will be referred to as environment variables/settings. Third party issued secret keys or server port numbers fall under this category. In most cases, you will find such configurations in static variables inside the application.
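As a sketch of that second layer, a nodejs application typically reads such settings from process.env, with static fallbacks; the variable names below (PORT, STRIPE_SECRET_KEY) are illustrative assumptions, not prescribed names:

```javascript
// config.js - reads environment settings, falling back to static defaults.
// PORT and STRIPE_SECRET_KEY are hypothetical variable names.
function loadEnvSettings(env) {
  return {
    port: parseInt(env.PORT, 10) || 8080,         // server port number
    stripeSecretKey: env.STRIPE_SECRET_KEY || '', // third party issued secret
  };
}

module.exports = { loadEnvSettings };
```

A call such as loadEnvSettings(process.env) at application startup keeps the static defaults in one place.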

This blog will be about working with the first layer: system settings.

For disambiguation, the system is a computing entity composed of software (an operating system, etc.) and hardware (virtual or physical).

Managing system configuration variables

Since the environment variable layer of the configuration is, by default, technically embedded within the code that uses it, changes in configuration stay in sync with the code, and vice versa.

Unlike environment variables, system variables are not managed the same way as the nodejs applications they support. Just because our application's new version changed some environment settings does not mean that the nginx server had its settings changed as well. From another perspective, just because the latest nginx version changed some of its settings does not necessarily mean that our nodejs application's environment settings have to change as well.

The problem we constantly face is figuring out how to manage changes in configuration as the code evolves, and as the underlying system software evolves.

Things become a bit more complicated to manage when a third party software (database, monitoring tools) code change also involves configuration changes. We have to be informed about the changes at hand, which is not always evident, as those changes are released at the will of the vendors and not necessarily communicated to us in real time. Next, we have to figure out where every single configuration is located on our system, then apply the new modifications. Additional complexity comes in when new changes become incompatible with our current version of the nodejs application code, or when rollbacks are unavoidable.

The nodejs application code is not always in sync with the system that hosts it. This is where configuration management(aka CM) tools shine. Passing around both system and environment configuration variable values is a risky business, security-wise. This is where configuration provisioning tools come in handy.

Provisioning secrets at deployment time

In teams that have CI/CD implemented, every programmer has the ability to deploy the latest code version to production. With great power comes great responsibility. Making sensitive data accessible to a larger audience comes with an increased security risk, for instance leaking secret keys to the public.

The challenge lies in how to approach configuration data management as part of the software: giving developers the ability to work with code, while limiting access to production configuration secrets.

The key is provisioning production secrets at deployment time, as part of the delivery step, and letting every developer have their own development secrets. This way, one compromised developer account cannot lead to an organization-wide data breach.

Examples of tools that make provisioning secrets possible: SecretHub, Kubernetes, HashiCorp Vault, etc.
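A minimal sketch of that idea, assuming secrets are injected as environment variables by the delivery step, while a developer's local default is only honored outside production (the names below are hypothetical):

```javascript
// secrets.js - production secrets must be provisioned at deployment time;
// development falls back to a developer's own local default.
function resolveSecret(name, env, devDefault) {
  if (env[name]) return env[name]; // provisioned by CI/CD or a vault
  if (env.NODE_ENV === 'production') {
    throw new Error(`Secret ${name} was not provisioned at deployment time`);
  }
  return devDefault; // developer's own development secret
}
```

Failing loudly in production, instead of silently using a development secret, is what makes a missing provisioning step visible at deployment time.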

Reducing configuration changes when new system applications rollout

The Twelve-Factor App methodology suggests managing configuration as code. That makes it fast to deploy the application anywhere, with fewer code changes when new code releases are received.

In most applications that are not containerized, configurations can be stored on a file server, for example at /etc/config/[app-name]/config.ext. This works on a smaller scale. Eventually it becomes a problem when setting up new developer and production machines, but having such a convention in place reduces the pain.

When managing multiple instances of the same application, it is better to move this configuration inside the code, at least at build time, ideally at the root: [app-root]/config/config.ext. At deployment time, there will be an additional symlinking step to make sure the new deployment points to the right configuration files.

Configure nginx to serve nodejs application

nginx is a very good alternative to the Apache server. Its non-blocking, single threaded model makes it a perfect match to proxy nodejs applications. It is also possible to configure it as a load balancer.

The location of nginx configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and nginx is installed and configured at /etc/nginx.

Some other possible places are /usr/local/nginx, /usr/local/etc/nginx, or any other location depending on how the operating system manages its filesystem.

We recommend reading “How to install nodejs” and “How to install nginx” for more in-depth information that may not be found in the current blog post.

The magic happens in the upstream nodeapps section. This configuration plays the gateway role, and makes public a server that was otherwise private.


upstream nodeapps{
  # Directs to the process with least number of connections.
  least_conn;
  server 127.0.0.1:8080 max_fails=0 fail_timeout=10s;
  server 127.0.0.1:8081 max_fails=0 fail_timeout=10s; # one entry per nodejs process
  keepalive 512;
}

server {
  listen 80;
  server_name app.website.tld;
  client_max_body_size 16M;
  keepalive_timeout 10;

  # Make site accessible from http://localhost/
  root /var/www/[app-name]/app;
  location / {
    proxy_pass http://nodeapps;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

Example: Typical nginx configuration at /etc/nginx/sites-available/app-name

Configure redis to run with a nodejs server

redis is a minimalistic yet feature complete in-memory key-value data store. The need to have redis in addition to a database arises from the need to make real-time features possible in a clustered/multi-process nodejs deployment. It can run standalone or in a clustered environment.

The location of redis configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and redis is installed, with its configuration at /etc/redis.conf.

Some other possible places are /usr/local/redis, /usr/local/etc/redis or any other location depending on how the operating system manages its filesystem.

There is little to no configuration required to run a redis instance, and the same configuration data can be passed as arguments at start time.

port 6380
maxmemory 2mb

Example:

To launch redis via the CLI, the following commands can be typed on the interface:

- $ redis-server --port 6380 --slaveof 127.0.0.1 6379 starts redis on localhost as a replica of another instance running on port 6379.
- $ redis-server /usr/local/etc/redis.conf starts redis using the configuration settings stated in /usr/local/etc/redis.conf.
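On the application side, the port chosen above has to match what the nodejs process connects to. A small helper that derives a redis connection URL from environment variables keeps the two in sync; REDIS_HOST and REDIS_PORT are assumed names, and the URL form is the one accepted by, for example, the node-redis client's createClient({ url }) option:

```javascript
// Builds a redis connection URL from environment variables.
// REDIS_HOST and REDIS_PORT are hypothetical variable names.
function redisUrl(env) {
  const host = env.REDIS_HOST || '127.0.0.1';
  const port = env.REDIS_PORT || '6379';
  return `redis://${host}:${port}`;
}
```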

We recommend reading “How to install redis” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as the redis configuration manual.

Configure mongodb as a database server for nodejs project

mongodb is a noSQL database engine that covers most use cases a nodejs application encounters. It is possible to configure the database in a cluster, as well as in standalone mode.

The location of mongodb configuration files depends on the operating system distribution the database server is hosted on. In our context, we assume that our operating system is Linux/Unix and mongodb is installed, with its configuration at /etc/mongod.conf.

Some other possible places are /usr/local/mongodb, /usr/local/etc/mongodb or any other location depending on how the operating system manages its filesystem. As always, init scripts can be found at //

There are not many configuration changes needed to run a mongodb server. It is possible to start using the service right after installation, with one exception: when running multiple mongodb instances on the same server, or when replication and sharding features are needed.

processManagement:
   fork: true
net:
   bindIp: localhost,10.8.0.10,192.168.4.24,/tmp/mongod.sock  # default: localhost
   port: 27017
storage:
   dbPath: /custom/path/to/mongodb  # default: /srv/mongodb
   journal:
      enabled: true
systemLog:
   destination: file
   path: "/custom/path/to/mongod.log"  # default: /var/log/mongodb/mongod.log
   logRotate: rename
   logAppend: true
security:
   keyFile: /srv/mongodb/keyfile

Example: typical mongodb configuration in /etc/mongod.conf
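On the nodejs side, the net settings configured above translate into a connection string. A sketch of a helper deriving it from environment variables could look like the following; MONGO_HOST, MONGO_PORT, MONGO_DB, and the appname default are assumptions:

```javascript
// Builds a mongodb connection string matching the net settings
// in /etc/mongod.conf. Variable names are hypothetical.
function mongoUri(env) {
  const host = env.MONGO_HOST || 'localhost';
  const port = env.MONGO_PORT || '27017';
  const db = env.MONGO_DB || 'appname';
  return `mongodb://${host}:${port}/${db}`;
}
```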

We recommend reading “How to install mongodb” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as the mongodb administration and configuration manuals.

Configure nginx to serve WebSockets with an expressjs and socket.io application

The configuration exposed above does make a nodejs server running on a private network public. However, since the protocol of communication is HTTP, any other protocol, for instance WebSocket, trying to communicate on the same channel will yield an error. To make WebSocket work on the same port as HTTP (port 80), we need nginx to upgrade the HTTP connection so that WebSocket messages can pass through as well.

The script below executes tasks in the following order:

#1 Tells nginx to proxy requests using HTTP/1.1, the version the WebSocket handshake requires
#2 Tells nginx to forward the client's Upgrade header to the upstream server
#3 Tells nginx to set the Connection header to "upgrade" on the proxied request

server{
  #...
  location /{
      proxy_http_version 1.1; #1
      proxy_set_header Upgrade $http_upgrade; #2
      proxy_set_header Connection "upgrade"; #3
  }
}

Example: 3 lines that enable nginx to serve WebSockets

Proxying WebSockets in an nginx configuration is based on ideas from Chris Lea's blog post Proxying WebSockets with Nginx.

Configure upstart to start nodejs application

With the configurations we have at this point, the system applications are usable if we start them from the command line interface. That is, every application has to be explicitly executed.

The issue in a production environment is that the terminal has to be closed at some point, once all tasks on the command line interface are completed. There are already services such as init or systemctl that ship with the system. We use upstart for starting and stopping applications because of its ease of configuration, and the asynchronous, reactive nature that the other tools mentioned above lack.

upstart is a free and open source, event-based init system. It was designed with the Ubuntu Linux distribution in mind, but can also work on other Linux/Unix distributions. It has an expressive task declaration syntax that even newbies can feel comfortable using.

The location of upstart configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and upstart job files live in /etc/init.

Some other possible places are /usr/local/upstart, /usr/local/etc/upstart or any other location depending on how the operating system manages its filesystem.

At the end of a successful configuration of tasks, we should be able to start all of the system applications and services using the following script, either by running commands one by one, or by making an extra executable file to simplify the task.

As a reminder, the command follows the pattern sudo service <servicename> <control>, where the service name is technically our application, its job file located at /etc/init/<servicename>.conf, and control is one of the start, restart, or stop keywords.

# testing validity of configurations
init-checkconf /etc/init/nginx.conf
init-checkconf /etc/init/redis.conf
init-checkconf /etc/init/mongod.conf
init-checkconf /etc/init/appname.conf

# restart to re-use same script post deployment
service nginx   restart  
service redis   restart  
service mongod  restart  
service appname restart  

Example: tasks to start/restart all deployment applications in appname/bin/start.sh or on a command line

Alternatively, we should be able to stop services either one by one, or all of the services, using the scripts as in the following example

service nginx   stop  
service redis   stop  
service mongod  stop  
service appname stop  

# In case mongod fails to halt 
sudo /usr/bin/mongod -f /etc/mongod.conf --shutdown

Example: tasks to stop applications in appname/bin/stop.sh or on a command line

Now that we know how to launch our services, the question remains of how to configure each one of the services we are running. The following are typical examples of how the aforementioned services can be brought online.

# nginx

description "nginx http daemon"
author "Author Name"

start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]

env DAEMON=/usr/sbin/nginx
env PID=/var/run/nginx.pid

expect fork
respawn
respawn limit 10 5
#oom never

pre-start script
        # abort startup if the nginx configuration does not validate
        if ! $DAEMON -t; then
                exit 1
        fi
end script

exec $DAEMON

Example: nginx job descriptor in /etc/init/nginx.conf source

The job that will be executed by the redis service is as in the following script.

description "redis server"

start on runlevel [23]
stop on shutdown

pre-stop script
    rm /var/run/redis.pid
end script

script
  echo $$ > /var/run/redis.pid
  exec sudo -u redis /usr/bin/redis-server /etc/redis/redis.conf
end script

respawn limit 15 5

Example: redis job descriptor in /etc/init/redis.conf

If you plan to use an external monitoring service, respawn limit 15 5 should either be removed, or the monitoring tool should take over restarting the failing service; the limit stops respawning a job that restarts more than 15 times within a 5 second interval.

The job that will be executed by the mongodb service is as in the following script.

This example is minimalistic, more details can be found on this resource: Github mongod.upstart. Some tuning may be required before use.

#!upstart
description "mongodb server"
author      "author name <author@email>"

start on runlevel [23]
stop on shutdown

pre-stop script
    rm /var/run/mongod.pid
end script

script
  echo $$ > /var/run/mongod.pid
  exec sudo -u mongod /usr/bin/mongod -f /etc/mongod.conf
end script

respawn limit 15 5

Example: mongod job descriptor in /etc/init/mongod.conf


The next and last step in this section is an example of the script used to start the nodejs server. At this point, any disruption or unhandled problem in the application will bring down the system as a whole. Other services will be up and running, but unfortunately the nodejs server won't! To make failure recovery automatic, we will need yet another tool, described in the next section.

#!upstart
description "appname nodejs server"
author      "author name <author@email>"

start on startup
stop on shutdown

script
    export HOME="/var" # this is required by node to be set 
    echo $$ > /var/run/appname.pid
    exec sudo -u appname sh -c "/usr/bin/node /var/www/appname/server.js >> /var/log/appname.log 2>&1"
end script

pre-start script
    # Date format same as (new Date()).toISOString() for consistency
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] Starting" >> /var/log/appname.log
end script

pre-stop script
    rm /var/run/appname.pid
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] Stopping" >> /var/log/appname.log
end script

Example: appname job descriptor in /etc/init/appname.conf

We recommend reading “How to install upstart” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as “The upstart event system, what it is and how to use it” and, on the nginx blog, “Ubuntu upstart”.

Configure monit to monitor nodejs application

The previous section discussed how to automate starting/stopping services. However, when something goes unpredictably wrong, we will not know that something bad happened, nor be able to tell which system is the culprit. Moreover, we will not be able to recover from the failure, at least not by restarting the failing service.

The monitoring tool discussed below addresses most of the issues stated above.

monit is a free and open source monitoring tool. With a little bit of ingenuity, it is possible to use it to trigger task execution, such as sending an alert when something goes off the rails or restarting a failing application.

The location of monit configuration files depends on the operating system distribution the application is hosted on. In our context, we assume that our operating system is Linux/Unix and monit is installed and configured at /etc/monit.

Some other possible places are /usr/local/monit, /usr/local/etc/monit or any other location depending on how the operating system manages its filesystem.

The monitoring will be configured as follows:

# The application
check host appname with address 127.0.0.1
    start program = "/sbin/start appname"
    stop program = "/sbin/stop appname"
    restart program = "/sbin/restart appname"
    if failed port 80 protocol http
        request /ok
        with timeout 5 seconds
        then restart
    if cpu > 95% for 2 cycles then alert          # Alert on excessive CPU usage
    if total cpu > 99% for 10 cycles then restart # Restart if CPU stays above 99% for 10 cycles

# Checking using PID 
check process nginx with pidfile /var/run/nginx.pid
    start program = "/sbin/start nginx"       # service nginx start
    stop program = "/sbin/stop nginx"         # service nginx stop
    restart program = "/sbin/restart nginx"
    if failed port 80 protocol http then restart  # restart when process is up, but not answering
    if failed port 443 protocol https then restart

check process redis with pidfile /var/run/redis.pid
    start program = "/sbin/start redis"       # service redis start
    stop program = "/sbin/stop redis"         # service redis stop
    if memory > 50 MB then alert
    if total memory > 500 MB then restart

check process mongod with pidfile /var/run/mongod.pid
    start program = "/sbin/start mongod"      # service mongod start
    stop program = "/sbin/stop mongod"        # service mongod stop
    restart program = "/sbin/restart mongod"
    if failed port 27017 protocol mongo then restart  
    if disk read > 10 MB/s for 2 cycles then alert  # Alert on heavy disk reads

Example: in /etc/monit/monitrc

To check the validity of /etc/monit/monitrc, the following command can be used: monit -t. If everything looks good, starting services under monit can be done with the following command: monit start all.

There is one more aspect that was not discussed in the scripts above: “How does monit know where to send messages in case of an alert?”. The answer is in the next script, as provided by the monit documentation, and worth sharing in this blog post:

# Where to send the email
set alert foo@bar
# What message format 
set mail-format {
      from: Monit Support <monit@foo.bar>
  reply-to: support@domain.com
   subject: $SERVICE $EVENT at $DATE
   message: Monit $ACTION $SERVICE at $DATE on $HOST: $DESCRIPTION.
            Yours sincerely,
            monit
 }
 # Setting the mailserver, in our case, mailgun 
 set mailserver smtp.mailgun.org port 587
  username mailgunusr@domain.com password <PASSWORD>
  using <SSL> with timeout 30 seconds
# <SSL> can be SSLV2 | SSLV3 | TLSV1 | TLSV11 | TLSV12 | TLSV13

Example: custom alert messages in /etc/monit/monitrc

This is an example of a few things that can be achieved. There is more monit can do to enhance the deployment experience, free of charge. Those things can include, but are not limited to, scheduled reporting, database backups, purging sessions or accounts that look suspicious, as well as triggering tasks that send emails.

We recommend reading “How to install monit” and “How to install nodejs” for more in-depth information that may not be found in the current blog post, as well as a quick tutorial on monit, “How to install and configure monit”, and “Creating issues when something goes wrong”.

Conclusion

The two tools that tie the whole system together also need a system to start and stop them. Luckily, the Linux/Unix environment provides a way to make daemons start at boot time.

#snippets #configurations #questions #discuss #y2020 #Jan2020