<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>TDD &#8212; Simple Engineering</title>
    <link>https://getsimple.works/tag:TDD</link>
    <description></description>
    <pubDate>Thu, 23 Apr 2026 14:43:22 +0000</pubDate>
    <item>
      <title>How to stub a stream function</title>
      <link>https://getsimple.works/how-to-stub-a-stream-function?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[The stream API provides a heavy-weight asynchronous computation model that keeps a small memory footprint. As exciting as it may sound, testing streams is somehow intimidating. This blog layout some key elements necessary to be successful when mocking stream API.&#xA;&#xA;  We keep in mind that there is a clear difference between mocking versus stub/spying/fakes even though we used mock interchangeably.&#xA;&#xA;In this article we will talk about: &#xA;&#xA;Understanding the difference between Readable and Writable streams &#xA;Stubbing Writable stream&#xA;Stubbing Readable stream &#xA;Stubbing Duplex or Transformer streams &#xA;&#xA;  Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book.  Testing nodejs Applications Book Cover&#xA;&#xA;Show me the code&#xA;&#xA;var  gzip = require(&#39;zlib&#39;).createGzip();//quick example to show multiple pipings&#xA;var route = require(&#39;expressjs&#39;).Router(); &#xA;//getter() reads a large file of songs metadata, transform and send back scaled down metadata &#xA;route.get(&#39;/songs&#39; function getter(req, res, next){&#xA;        let rstream = fs.createReadStream(&#39;./several-TB-of-songs.json&#39;); &#xA;        rstream.&#xA;            pipe(new MetadataStreamTransformer()).&#xA;            pipe(gzip).&#xA;            pipe(res);&#xA;        // forwaring the error to next handler     &#xA;        rstream.on(&#39;error&#39;, (error) =  next(error, null));&#xA;});&#xA;&#xA;  At a glance The code is supposed to read a very large JSON file of TB of metadata about songs, apply some transformations, gzip, and send the response to the caller, by piping the results on the response object. 
&#xA;&#xA;The next example demonstrates how a typical transformer such as MetadataStreamTransformer looks like   &#xA;&#xA;const inherit = require(&#39;util&#39;).inherits;&#xA;const Transform = require(&#39;stream&#39;).Tranform;&#xA;&#xA;function MetadataStreamTransformer(options){&#xA;    if(!(this instanceof MetadataStreamTransformer)){&#xA;        return new MetadataStreamTransformer(options);&#xA;    }&#xA;    this.options = Object.assign({}, options, {objectMode: true});//&lt;= re-enforces object mode chunks&#xA;    Transform.call(this, this.options);&#xA;}&#xA;inherits(MetadataStreamTransformer, Transform);&#xA;MetadataStreamTransformer.prototype.transform = function(chunk, encoding, next){&#xA;    //minimalistic implementation &#xA;    //@todo  process chunk + by adding/removing elements&#xA;    let data = JSON.parse(typeof chunk === &#39;string&#39; ? chunk : chunk.toString(&#39;utf8&#39;));&#xA;    this.push({id: (data || {}).id || random() });&#xA;    if(typeof next === &#39;function&#39;) next();&#xA;};&#xA;&#xA;MetadataStreamTransformer.prototype.flush = function(next) {&#xA;    this.push(null);//tells that operation is over &#xA;    if(typeof next === &#39;function&#39;) {next();}&#xA;};&#xA;&#xA;  Inheritance as explained in this program might be old, but illustrates good enough in a prototypal way that our  MetadataStreamTransformer inherits stuff from Stream#Transformer&#xA;&#xA;What can possibly go wrong?&#xA;&#xA;stubbing functions in stream processing scenario may yield the following challenges:&#xA;&#xA;How to deal with the asynchronous nature of streams &#xA;Identify areas where it makes sense to a stub, for instance: expensive operations &#xA;Identifying key areas needing drop-in replacements, for instance reading from a third party source over the network.&#xA;&#xA;Primer&#xA;&#xA;The keyword when stubbing streams is:&#xA;&#xA;To identify where the heavy lifting is happening. 
In pure terms of streams, functions that executes read() and write() are our main focus. &#xA;To isolate some entities, to be able to test small parts in isolation. For instance, make sure we test MetadataStreamTransformer in isolation, and mock any response fed into .pipe() operator in other places. &#xA;&#xA;  What is the difference between readable vs writable vs duplex streams? The long answer is available in substack&#39;s Stream Handbook&#xA;&#xA;Generally speaking, Readable streams produce data that can be feed into Writable streams. Readable streams can be .piped on, but not into.  Readable streams have readable|data events, and implementation-wise, implement .read() from Stream#Readable interface. &#xA;&#xA;Writable streams can be .piped into, but not on. For example, res  examples above are piped to an existing stream. The opposite is not always guaranteed. Writable streams also have writable|data events, and implementation-wise, implement .write() from Stream#Writable interface.&#xA;&#xA;Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transformer streams are duplex, implement .transform() Stream#Transformer interface. 
&#xA;&#xA;Modus Operandi&#xA;&#xA;How to test the above code by taking on smaller pieces?&#xA;&#xA;fs.createReadStream won&#39;t be tested, but stubbed and returns a mocked readable stream &#xA;.pipe() will be stubbed to return a chain of stream operators&#xA;gzip and res won&#39;t be tested, therefore stubbed to returns a writable+readable mocked stream objects &#xA;rstream.on(&#39;error&#39;, cb) stub readable stream with a read error, spy on next() and check if it has been called upon &#xA;MetadataStreamTransformer will be tested in isolation and MetadataStreamTransformer.transform() will be treated as any other function, except it accepts streams and emits events  &#xA;&#xA;How to stub stream functions &#xA;&#xA;describe(&#39;/songs&#39;, () =  {&#xA;    before(() =  {&#xA;        sinon.stub(fs, &#39;createReadStream&#39;).returns({&#xA;            pipe: sinon.stub().returns({&#xA;                pipe: sinon.stub().returns({&#xA;                    pipe: sinon.stub().returns(responseMock)&#xA;                })&#xA;            }),&#xA;            on: sinon.spy(() =  true)&#xA;        })&#xA;    });&#xA;});&#xA;&#xA;This way of chained stubbing is available in our toolbox. Great power comes with great responsibilities, and wielding this sword may not always be a good idea. &#xA;&#xA;  There is an alternative at the very end of this discussion&#xA;&#xA;The transformer stream class test in isolation may be broken down to&#xA;&#xA;stub the whole Transform instance&#xA;Or stub the .push() and simulate a write by feeding in the readable mocked stream of data&#xA;&#xA;  the stubbed push() is a good place to add assertions&#xA;&#xA;it(&#39;_transform()&#39;, function(){&#xA;    var Readable = require(&#39;stream&#39;).Readable;&#xA;    var rstream = new Readable(); &#xA;    var mockPush = sinon.stub(MetadataStreamTransformer, &#39;push&#39;, function(data){&#xA;        assert.isNumber(data.id);//testing data sent to callers. 
etc&#xA;        return true;&#xA;    });&#xA;    var tstream = new MetadataStreamTransformer();&#xA;    rstream.push({id: 1});&#xA;    rstream.push({id: 2});&#xA;    rstream.pipe(tstream);&#xA;    expect(tstream.push.called, &#39;#push() has been called&#39;);&#xA;    mockPush.restore(); &#xA;});&#xA;&#xA;How to Mock Stream Response Objects&#xA;&#xA;The classic example of a readable stream is reading from a file. This example shows how mocking fs.createReadStream and returns a readable stream, capable of being asserted on. &#xA;&#xA;//stubb can emit two or more streams + close the stream&#xA;var rstream = fs.createReadStream();&#xA;sinon.stub(fs, &#39;createReadStream&#39;, function(file){ &#xA;    //trick from @link https://stackoverflow.com/a/33154121/132610&#xA;    assert(file, &#39;#createReadStream received a file&#39;);&#xA;    rstream.emit(&#39;data&#39;, &#34;{id:1}&#34;);&#xA;    rstream.emit(&#39;data&#39;, &#34;{id:2}&#34;);&#xA;    rstream.emit(&#39;end&#39;);&#xA;    return false; &#xA;});&#xA;&#xA;var pipeStub = sinon.spy(rstream, &#39;pipe&#39;);&#xA;//Once called this above structure will stream two elements: good enough to simulate reading a file.&#xA;//to stub gzip library: another transformer stream: producing &#xA;var next = sinon.stub();&#xA;//use this function| or call the whole route &#xA;getter(req, res, next);&#xA;//expectations follow: &#xA;expect(rstream.pipe.called, &#39;#pipe() has been called&#39;);&#xA;&#xA;Conclusion&#xA;&#xA;In this article, we established the difference between Readable and Writable streams and how to stub each one of them when unit test. &#xA;&#xA;Testing tends to be more of art, than a science, practice makes perfect. There are additional complimentary materials in the &#34;Testing nodejs applications&#34; book. 
&#xA;&#xA;References&#xA;&#xA;Testing nodejs Applications book&#xA;More on readable streams(Stream2) ~ Jimmy Chao ~ NeetHack Blog&#xA;QA: Mock Streams ~ StackOverflow Question&#xA;Mock System APIs ~ Gleb Bahmutov Blog&#xA;Streaming to Mongo available for shard-ed clusters ~ mongodb Docs&#xA;Source code of glob stream to know more about using Glob Stream &#xA;How to TDD Streams&#xA;Testing with vinyl for writing to files&#xA;&#xA;tags: #snippets #TDD #streams #nodejs #mocking]]&gt;</description>
      <content:encoded><![CDATA[<p>The stream API provides a powerful asynchronous computation model that keeps a small memory footprint. As exciting as that may sound, testing streams is somewhat intimidating. This blog post lays out key elements necessary to successfully mock the stream API.</p>

<blockquote><p>Keep in mind that there is a clear difference between mocks, stubs, spies, and fakes, even though we use "mock" interchangeably in this article.</p></blockquote>

<p><strong><em>In this article we will talk about:</em></strong></p>
<ul><li>Understanding the difference between Readable and Writable streams</li>
<li>Stubbing Writable streams</li>
<li>Stubbing Readable streams</li>
<li>Stubbing Duplex or Transform streams</li></ul>

<blockquote><p>Even though this blog post was designed to offer complementary materials to those who bought my <strong><em><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>, the content can help any software developer tune up their working environment. <strong><em><a href="https://bit.ly/2ZFJytb">You can use this link to buy the book</a></em></strong>.  <a href="https://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing nodejs Applications Book Cover"/></a></p></blockquote>

<h2 id="show-me-the-code" id="show-me-the-code">Show me the code</h2>

<pre><code class="language-JavaScript">var fs = require(&#39;fs&#39;);
var gzip = require(&#39;zlib&#39;).createGzip();//quick example to show multiple pipings
var route = require(&#39;express&#39;).Router(); 
//getter() reads a large file of songs metadata, transforms and sends back scaled-down metadata 
route.get(&#39;/songs&#39;, function getter(req, res, next){
        let rstream = fs.createReadStream(&#39;./several-TB-of-songs.json&#39;); 
        rstream.
            pipe(new MetadataStreamTransformer()).
            pipe(gzip).
            pipe(res);
        // forwarding the error to the next handler     
        rstream.on(&#39;error&#39;, (error) =&gt; next(error));
});
</code></pre>

<blockquote><p><strong><em>At a glance</em></strong> The code reads a very large (terabytes) JSON file of song metadata, applies some transformations, <code>gzip</code>s the result, and sends it to the caller by piping into the response object.</p></blockquote>

<p>The next example demonstrates what a typical transformer such as <code>MetadataStreamTransformer</code> looks like:</p>

<pre><code class="language-JavaScript">const inherits = require(&#39;util&#39;).inherits;
const Transform = require(&#39;stream&#39;).Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    this.options = Object.assign({}, options, {objectMode: true});//&lt;= enforces object-mode chunks
    Transform.call(this, this.options);
}
inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo  process chunk + by adding/removing elements
    let data = JSON.parse(typeof chunk === &#39;string&#39; ? chunk : chunk.toString(&#39;utf8&#39;));
    this.push({id: (data || {}).id || Math.random() });
    if(typeof next === &#39;function&#39;) next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals the end of the stream 
    if(typeof next === &#39;function&#39;) {next();}
};
</code></pre>

<blockquote><p>Inheritance as written in this program might look dated, but it illustrates well enough, in a prototypal way, that our <code>MetadataStreamTransformer</code> inherits from <code>Stream#Transform</code></p></blockquote>

<h2 id="what-can-possibly-go-wrong">What can possibly go wrong?</h2>

<p>Stubbing functions in a stream processing scenario raises the following challenges:</p>
<ul><li>How to deal with the asynchronous nature of streams</li>
<li>Identifying areas where it makes sense to stub, for instance expensive operations</li>
<li>Identifying key areas needing drop-in replacements, for instance reading from a third-party source over the network.</li></ul>

<h2 id="primer">Primer</h2>

<p>The key when stubbing streams is:</p>
<ul><li>To identify where the heavy lifting is happening. In pure stream terms, functions that execute <code>_read()</code> and <code>_write()</code> are our main focus.</li>
<li>To isolate entities, so that small parts can be tested in isolation. For instance, make sure we test <code>MetadataStreamTransformer</code> in isolation, and mock any response fed into the <code>.pipe()</code> operator elsewhere.</li></ul>

<blockquote><p>What is the difference between readable vs writable vs duplex streams? The long answer is available in <a href="https://github.com/substack/stream-handbook"><code>substack</code>&#39;s Stream Handbook</a></p></blockquote>

<p>Generally speaking, Readable streams produce data that can be fed into Writable streams. Readable streams can be <code>.pipe</code>d on, but not into. Readable streams have <code>readable|data</code> events, and implementation-wise, implement <code>._read()</code> from the <code>Stream#Readable</code> interface.</p>

<p>Writable streams can be <code>.pipe</code>d into, but not on. For example, <code>res</code> in the examples above is piped into from an existing stream. The opposite is not guaranteed. Writable streams have <code>drain|finish</code> events, and implementation-wise, implement <code>._write()</code> from the <code>Stream#Writable</code> interface.</p>

<p>Duplex streams go both ways. They have the ability to read from the previous stream and write to the next stream. Transform streams are duplex and implement <code>._transform()</code> from the <code>Stream#Transform</code> interface.</p>

<h2 id="modus-operandi">Modus Operandi</h2>

<p>How can the above code be tested by taking on smaller pieces?</p>
<ul><li><code>fs.createReadStream</code> won&#39;t be tested, but stubbed to return a mocked readable stream</li>
<li><code>.pipe()</code> will be stubbed to return a chain of stream operators</li>
<li><code>gzip</code> and <code>res</code> won&#39;t be tested, therefore stubbed to return writable+readable mocked stream objects</li>
<li><code>rstream.on(&#39;error&#39;, cb)</code> will be tested by stubbing the readable stream with a read error, spying on <code>next()</code> and checking that it has been called</li>
<li><code>MetadataStreamTransformer</code> will be tested in isolation and <code>MetadataStreamTransformer._transform()</code> will be treated as any other function, except it accepts streams and emits events</li></ul>

<h2 id="how-to-stub-stream-functions">How to stub stream functions</h2>

<pre><code class="language-JavaScript">describe(&#39;/songs&#39;, () =&gt; {
    before(() =&gt; {
        sinon.stub(fs, &#39;createReadStream&#39;).returns({
            pipe: sinon.stub().returns({
                pipe: sinon.stub().returns({
                    pipe: sinon.stub().returns(responseMock)
                })
            }),
            on: sinon.spy(() =&gt; true)
        });
    });
    after(() =&gt; {
        fs.createReadStream.restore();//restore the real implementation
    });
});
</code></pre>

<p>This way of chained stubbing is available in our toolbox. Great power comes with great responsibilities, and wielding this sword may not always be a good idea.</p>

<blockquote><p>There is an alternative at the very end of this discussion</p></blockquote>

<p>Testing the transformer stream class in isolation may be broken down into:</p>
<ul><li>stubbing the whole Transform instance</li>
<li>or stubbing <code>.push()</code> and simulating a write by feeding in the readable mocked stream of data</li></ul>

<blockquote><p>the stubbed <code>push()</code> is a good place to add assertions</p></blockquote>

<pre><code class="language-JavaScript">it(&#39;_transform()&#39;, function(){
    var Readable = require(&#39;stream&#39;).Readable;
    var rstream = new Readable(); 
    var mockPush = sinon.stub(MetadataStreamTransformer, &#39;push&#39;, function(data){
        assert.isNumber(data.id);//testing data sent to callers. etc
        return true;
    });
    var tstream = new MetadataStreamTransformer();
    rstream.push({id: 1});
    rstream.push({id: 2});
    rstream.pipe(tstream);
    expect(tstream.push.called, &#39;#push() has been called&#39;);
    mockPush.restore(); 
});
</code></pre>

<h2 id="how-to-mock-stream-response-objects">How to Mock Stream Response Objects</h2>

<p>The classic example of a readable stream is reading from a file. This example shows how to mock <code>fs.createReadStream</code> and return a readable stream capable of being asserted on.</p>

<pre><code class="language-JavaScript">//the stub can emit two or more chunks + close the stream
var PassThrough = require(&#39;stream&#39;).PassThrough;
var rstream = new PassThrough();
sinon.stub(fs, &#39;createReadStream&#39;).callsFake(function(file){ 
    //trick from @link https://stackoverflow.com/a/33154121/132610
    assert(file, &#39;#createReadStream received a file&#39;);
    //emit on the next tick, after .pipe() handlers are attached
    process.nextTick(() =&gt; {
        rstream.emit(&#39;data&#39;, &#34;{id:1}&#34;);
        rstream.emit(&#39;data&#39;, &#34;{id:2}&#34;);
        rstream.emit(&#39;end&#39;);
    });
    return rstream; 
});

var pipeStub = sinon.spy(rstream, &#39;pipe&#39;);
//Once called, the structure above will stream two elements: good enough to simulate reading a file.
//to stub the `gzip` library: another transformer stream 
var next = sinon.stub();
//use this function | or call the whole route 
getter(req, res, next);
//expectations follow: 
expect(rstream.pipe.called, &#39;#pipe() has been called&#39;).to.equal(true);
</code></pre>

<h2 id="conclusion">Conclusion</h2>

<p>In this article, we established the difference between Readable and Writable streams and how to stub each of them when unit testing.</p>

<p>Testing tends to be more of an art than a science; practice makes perfect. There are additional complementary materials in the <strong>“Testing <code>nodejs</code> applications”</strong> book.</p>

<h2 id="references">References</h2>
<ul><li><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></li>
<li>More on readable streams(Stream2) ~ <a href="https://neethack.com/2013/12/understand-node-stream-what-i-learned-when-fixing-aws-sdk-bug/">Jimmy Chao ~ NeetHack Blog</a></li>
<li>QA: Mock Streams ~ <a href="https://stackoverflow.com/questions/33141012/how-to-mock-streams-in-nodejs">StackOverflow Question</a></li>
<li>Mock System APIs ~ <a href="https://glebbahmutov.com/blog/mock-system-apis/">Gleb Bahmutov Blog</a></li>
<li>Streaming to Mongo available for <code>shard</code>-ed clusters ~ <a href="https://docs.mongodb.com/manual/tutorial/change-streams-example/"><code>mongodb</code> Docs</a></li>
<li>Source code of <a href="https://github.com/wearefractal/glob-stream">glob stream</a> to know more about using Glob Stream</li>
<li><a href="https://stackoverflow.com/q/23141226/132610">How to TDD Streams</a></li>
<li><a href="https://gulpjs.org/writing-a-plugin/testing">Testing with vinyl for writing to files</a></li></ul>

<p>tags: <a href="https://getsimple.works/tag:snippets" class="hashtag"><span>#</span><span class="p-category">snippets</span></a> <a href="https://getsimple.works/tag:TDD" class="hashtag"><span>#</span><span class="p-category">TDD</span></a> <a href="https://getsimple.works/tag:streams" class="hashtag"><span>#</span><span class="p-category">streams</span></a> <a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:mocking" class="hashtag"><span>#</span><span class="p-category">mocking</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-stub-a-stream-function</guid>
      <pubDate>Thu, 17 Jun 2021 06:08:53 +0000</pubDate>
    </item>
    <item>
      <title>Overview on testing nodejs applications</title>
      <link>https://getsimple.works/overview-on-testing-nodejs-applications?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This post highlights snapshots on best practices/hacks, to code, test, deploy and to maintain large-scale nodejs apps. It provides big lines on what became a book on testing nodejs applications.  &#xA;&#xA;  If you haven&#39;t yet, read the How to make nodejs applications modular article. This article is an overall follow-up.&#xA;&#xA;Like some of the articles that came before this one, we are going to focus on a simple question as our north star: What are the most important questions developers have when testing a nodejs application? When possible a quick answer will be provided, else we will point in the right direction where information can be found. &#xA;&#xA;In this article we will talk about: &#xA;&#xA;BDD versus TDD &#xA;Choosing the right testing tools &#xA;Testing servers&#xA;Testing modules &#xA;Testing routes&#xA;Testing controllers &#xA;Testing services &#xA;Testing middleware &#xA;Testing asynchronous code&#xA;Testing models&#xA;Testing WebSockets &#xA;Testing background jobs &#xA;&#xA;  Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book.  Testing nodejs Applications Book Cover&#xA;&#xA;Show me the code&#xA;&#xA;var express = require(&#39;express&#39;),&#xA;  app = express(),&#xA;  server = require(&#39;http&#39;).createServer(app);&#xA;//...&#xA;require(&#39;./config&#39;);&#xA;require(&#39;./utils/mongodb&#39;);&#xA;require(&#39;./utils/middleware&#39;)(app);&#xA;require(&#39;./routes&#39;)(app);&#xA;require(&#39;./realtime&#39;)(app, server)&#xA;//...&#xA;module.exports.server = server; &#xA;Example:&#xA;&#xA;  The code provided here is a recap of How to make nodejs applications modular article. You may need to give it a test drive, as this section highlights an already modularized example. 
&#xA;&#xA;Testing&#xA;&#xA;Automation is what developers do for a living. Manual testing is tedious, repetitive, and those are two key characteristics of things we love automating. Automated testing is quite intimidating for newbies and veterans alike. Testing tends to be more of an art, the more you practice, the better you hone your craft.&#xA;&#xA;  In the blogosphere, - My node Test Strategy  ~ RSharper Blog. - nodejs testing essentials&#xA;&#xA;BDD versus TDD&#xA;&#xA;Why should we even test&#xA;&#xA;Testing is unanimous within the developers community, the question always is around how to go about testing. &#xA;&#xA;There is a discussion mentioned in the first chapter between @kentbeck, @martinfowler and @dhh that made rounds on social media, blogs and finally as a subject of reflection in the community. When dealing with legacy code, there should be a balance and only adopt tdd as one tool in our toolbox. &#xA;&#xA;In the book we do the following exercise alternative to classic tdd: read, analyze, modify if necessary, rinse and repeat. We cut the bullshit, and get to test whatever needs to be tested, and let nature take its course.&#xA;&#xA;One thing is clear: We cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is on &#34;How&#34; to go about testing. &#xA;&#xA;  There is a summary of the discussions mentioned earlier, titled Is TDD Dead?. In the blogosphere, - BDD-TDD ~ RobotLovesYou Blog. 
- My node Test Strategy  ~ RSharper Blog - A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials &#xA;&#xA;What should be tested &#xA;&#xA;Before we dive into it, lets re-examine pros and cons of automated tests -- in the current case, Unit Tests.&#xA;&#xA;Pros:&#xA;&#xA;Steer release confidence &#xA;Prevents common use case and unexpected bugs&#xA;Help project&#39;s new developers better understand code &#xA;Improves confidence when refactoring code &#xA;Well tested product guarantees improves customer experience &#xA;&#xA;Cons:&#xA;&#xA;Take time to write&#xA;Increase learning curve&#xA;&#xA;At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything. Those are features of a product or functions of code. Re-testing large applications manually are daunting, exhausting, and sometimes simply not feasible. &#xA;&#xA;The good way to think about testing is not by thinking in terms of layers(controllers, models, etc.). Layers tend to be bigger. It is better to think in terms of something much smaller like a function(TDD way) or a feature(BDD way).&#xA;&#xA;Brief, every controller/business logic/utility libraries/nodejs servers/routes all features are also set to be tested ahead of release. &#xA;&#xA;  There is an article on this blog that gives more insight on -- How to create good test cases (Case   Feature   Expectations | GivenWhenThen) -- titled &#34;How to write test cases developers will love reading&#34;. In the blogosphere, - Getting started with nodejs and mocha&#xA;&#xA;Choosing the right testing tools&#xA;&#xA;There is no shortage of tools in nodejs community. The problem is analysis paralysis. Whenever the time comes to choose testing tools, there are layers that should be taken into account: test runners, test doubles, reporting, and eventually, if there is any compiler that needs to be added in the mix. 
&#xA;&#xA;Other than that, there is a list of a few things to consider when choosing a testing framework: - Learning curve - How easy to integrate into project/existing testing frameworks - How long does it take to debug testing code - Choice of the testing framework, and other testing tools consider - How good is documentation - How big is the community, and how good is the library maintained - What is may solve faster(Spies, Mocking, Coverage reports, etc) - Instrumentation and test reporting, just to name a few.&#xA;&#xA;  There are sections dedicated to providing hints and suggestions throughout the book. There is also this article &#34;How to choose the right tools&#34; on this blog that gives a baseline framework to choose, not only for testing frameworks but any tool. Finally, In the blogosphere, - jasmine vs. mocha, chai and sinon. - Evan Hahn has pretty good examples of the use of test doubles in How do I jasmine blog post.  - Getting started with nodejs and jasmine - has some pretty amazing examples, and is simple to start with. - Testing expressjs REST APIs with Mocha&#xA;&#xA;Testing servers&#xA;&#xA;The not-so-obvious part when testing servers is how to simulation of starting and stopping the server. These two operations should not bootstrap dependent servers(database, data-stores) or make side effects(network requests, writing to files) to reduce the risk associated with running an actual server. &#xA;&#xA;  There is a chapter dedicated to testing servers in the book. There is also this article on this blog that can give more insights. In the blogosphere, - How to correctly unit test express server - There is a better code structure organization, that makes it easy to test and get good test coverage on &#34;Testing nodejs with mocha&#34;. - How to correctly unit test express server&#xA;&#xA;Testing modules &#xA;&#xA;Testing modules is not that different from testing a function, or a class. 
When we start looking at this from this angle, things will be a little easy. &#xA;&#xA;The grain of salt: a module that is not directly a core component of our application, should be left alone and mocked out entirely when possible. This way we keep things isolated. &#xA;&#xA;  There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries(modules) in the book. There is also an entire series of articles -- a more theoretical: &#34;How to make nodejs applications modular and a more technical: &#34;How to modularize nodejs applications&#34; -- on this blog modularization techniques. In the blogosphere, - Export This: Interface Design Patterns for nodejs Modules Alon Salant, CEO of Good Eggs and nodejs module patterns using simple examples by Darren DeRider - How to modularize your Chat Application&#xA;&#xA;Testing routes&#xA;&#xA;Challenges while testing expressjs Routes&#xA;&#xA;Some of the challenges associated with testing routes are testing authenticated routes, mocking requests, mocking responses as well as testing routes in isolation without a need to spin up a server. When testing routes, it is easy to fall into integration testing trap, either for simplicity or for lack of motivation to dig deeper. &#xA;&#xA;  Integration testing trap is When a developer confuses integration test(or E2E) with unit test, and vice versa. The success of a balanced test coverage identifies sooner the king of tests adequate for a given context, what percentage of each kind of tests.&#xA;&#xA;For a test to be a unit test in route testing context, there will be - Focus to test code block(function, class, etc), not the output of a route - Mock requests to third party systems(Payment Gateway, Email Systems, etc) - Mock database read/write operations - Test worst-case scenario such as missing data and data-structure &#xA; &#xA;  There is a chapter dedicated to testing models in the book. 
There is also this article &#34;Testing expressjs Routes&#34; on this blog that gives more insight on the subject. In the blogosphere - A TDD approach to building a todo API using nodejs and mongodb - Marcus on supertest ~ Marcus Soft Blog&#xA;&#xA;Testing controllers &#xA;&#xA;When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or event classes. In MVC jargon, this layer is also known as the controller layer. &#xA;&#xA;Challenges testing controllers, by no surprise, are the same when testing expressjs route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations, or service layers, that is not core/critical to validation of the controller&#39;s expectations are some of such challenges. &#xA;&#xA;Mocking controller request/response objects, and when necessary, some middleware functions. &#xA;&#xA;  There is a chapter dedicated to testing controllers in the book. There is also this article Testing nodejs controllers with expressjs framework on this blog that gives more insight on the subject. In the blogosphere, - This article covers Mocking Responses, etc -- How to test express controllers. &#xA;&#xA;Testing services &#xA;&#xA;There are some instances where adding a service layer makes sense. &#xA;&#xA;One of those instances is when an application has a collection of single functions under utility(utils). Chances are some of the functions under the utility umbrella may be related in terms of features, the functionality they offer, or both. Such functions are good to use case to be grouped under a class: service&#xA;&#xA;Another good example is for applications that heavily use the model. Chances are the same functions can be re-used in multiple instances, and fixing an issue involves multiple places to fix as well. 
When that is the case, such functions can be grouped under one banner, in such a way that an update to one function, gets reflected in every instance where the function has been used.&#xA;&#xA;From these two use cases, the testing service has no one-size fit-all testing strategy. Every case of service should be dealt with depending on the context it is operating in.  &#xA;&#xA;  There is a chapter dedicated to testing services in the book. In the blogosphere, - &#34;Building Structured Backends with nodejs and HexNut&#34; by Francis Stokes ~ aka @fstokesman on Twitter source ...&#xA;&#xA;Testing middleware &#xA;&#xA;The middleware in a sense are hooks that intercept, process and forward the result to the rest of the route in the expressjs (connectjs) jargon. It is by no surprise that testing middleware shares the same challenges as testing route handlers and controllers. &#xA;&#xA;  There is a chapter dedicated to testing middleware in the book. There is also this article &#34;Testing expressjs Middleware&#34; on this blog that gives more insight on the subject. In the blogosphere, - How to test expressjs controllers&#xA;&#xA;Testing asynchronous code&#xA;&#xA;Asynchronous code is a wide subject in nodejs community. Things ranging from regular callbacks, promises, async/await constructs, streams, and event streams(reactive) are all under an asynchronous umbrella.&#xA;&#xA;Challenges associated with asynchronous testing, depending on the use case and context at hand. However, there are striking similarities say, testing testing async/await versus a promise. &#xA;&#xA;When an object is available, it makes sense to get a hold on it, execute assertions once it resolves. That is feasible for promises, streams, async/await construct. However, when the object is some kind of event, then the hold on the object can be used to add a listener and assert once the listener is resolved. &#xA;&#xA;  There are multiple chapters dedicated to testing asynchronous code in the book. 
There are also multiple article on this blog that gives more insight on the subject such as - &#34;How to stub a stream function&#34; - &#34;How to Stub Promise Function and Mock Resolved Output&#34; - &#34;Testing nodejs streams&#34;. In the blogosphere, - &#xA;&#xA;Testing models&#xA;&#xA;  testing models goes hand in hand with mocking database access functions&#xA;&#xA;Functions that access or change database state can be replaced by spy fakes, custom function replacements capable to supply|emulate similar results as replaced functions. &#xA;&#xA;sinon may not make unanimity, but is a feature-complete battle-tested test double library, amongst many others to choose from.&#xA;&#xA;  There is a chapter dedicated to testing models in the book. There is also this article  on this blog that gives more insight on the subject. In the blogosphere, - Mocking/Stubbing/Spying mongoose models - stubbing mongoose model question and answers on StackOverflow - Mocking database calls by wrapping mongoose with mockgoose&#xA;&#xA;Testing WebSockets&#xA;&#xA;Some of the challenges testing WebSockets can be summarized as trying to simulate: - sending and receiving a message on the WebSocket endpoint. &#xA;&#xA;  There is a chapter dedicated to testing WebSockets in the book. There is also this article on this blog that can give more ideas on how to go about testing WebSocket endpoints -- another one on how to integrate WebSockets with nodejs. Elsewhere in the blogosphere, - Testing socket.io with mocha, should.js and socket.io client - sharing session between expressjs and socket.io&#xA;&#xA;Testing background jobs &#xA;&#xA;The background jobs bring batch processing to the nodejs ecosystem. Background jobs constitute a special use case of asynchronous communication that spans time and processes on which the system is running on. 
&#xA;&#xA;Testing this kind of complex construct, require distilling the fundamental work done by each function/construct, by focusing on the signal without losing the big picture. It requires quite a paradigm shift(word used with reservation).  &#xA;&#xA;  There is a chapter dedicated to testing background jobs in the book. There is an article Testing nodejs streams on this blog that gives more insight on the subject. In the blogosphere, - Mocking/Stubbing/Spying mongoose models ~ CodeUtopia Blog&#xA;&#xA;Conclusion &#xA;&#xA;Some source code samples came from QA sites such as StackOverflow, hackers gists, Github documentation, developer blogs, and from my personal projects. &#xA;&#xA;There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because mentioning all of them can fit into a book. &#xA;&#xA;In this article, we highlighted what it takes to test various layers, at the same time make a difference between BDD/TDD testing schools. There are additional complimentary materials in the &#34;Testing nodejs applications&#34; book.  &#xA;&#xA;References&#xA;&#xA;Testing nodejs Applications book&#xA;Testing MEAN stack with Mocha ~ The Way of Code&#x9;~ &#34;How to build and test REST with nodejs Express Mocha&#34;&#xA;&#xA;#snippets #nodejs #testing #tdd #bdd]]&gt;</description>
<content:encoded><![CDATA[<p>This post highlights snapshots of best practices and hacks to code, test, deploy, and maintain large-scale <code>nodejs</code> apps. It provides the broad strokes of what became a book on <em><a href="https://bit.ly/2ZFJytb">testing <code>nodejs</code> applications</a></em>.</p>

<blockquote><p>If you haven&#39;t yet, read the <a href="./how-to-make-nodejs-application-modular.md">How to make <code>nodejs</code> applications modular</a> article. This article is an overall follow-up.</p></blockquote>

<p>Like some of the articles that came before this one, we are going to focus on a simple question as our north star: <em>What are the most important questions developers have when testing a <code>nodejs</code> application?</em> Where possible a quick answer is provided; otherwise we point in the direction where the information can be found.</p>

<p><strong><em>In this article we will talk about:</em></strong></p>
<ul><li><a href="./overview-on-testing-nodejs-applications#bdd-versus-tdd">BDD versus TDD</a></li>
<li><a href="./overview-on-testing-nodejs-applications#choosing-the-right-testing-tools">Choosing the right testing tools</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-servers">Testing servers</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-modules">Testing modules</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-routes">Testing routes</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-controllers">Testing controllers</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-services">Testing services</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-middleware">Testing middleware</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-asynchronous-code">Testing asynchronous code</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-models">Testing models</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-websockets">Testing WebSockets</a></li>
<li><a href="./overview-on-testing-nodejs-applications#testing-background-jobs">Testing background jobs</a></li></ul>

<blockquote><p>Even though this blog post was designed to offer complementary materials to those who bought my <strong><em><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>, the content can help any software developer to tune up their working environment. <strong><em><a href="https://bit.ly/2ZFJytb">You can use this link to buy the book</a></em></strong>.  <a href="https://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing nodejs Applications Book Cover"/></a></p></blockquote>

<h2 id="show-me-the-code">Show me the code</h2>

<pre><code class="language-JavaScript">var express = require(&#39;express&#39;),
  app = express(),
  server = require(&#39;http&#39;).createServer(app);
//...
require(&#39;./config&#39;);
require(&#39;./utils/mongodb&#39;);
require(&#39;./utils/middleware&#39;)(app);
require(&#39;./routes&#39;)(app);
require(&#39;./realtime&#39;)(app, server);
//...
module.exports.server = server; 
</code></pre>

<p><em>Example:</em></p>

<blockquote><p>The code provided here is a recap of <a href="./how-to-make-nodejs-application-modular.md">How to make <code>nodejs</code> applications modular</a> article. You may need to give it a test drive, as this section highlights an already modularized example.</p></blockquote>

<h2 id="testing">Testing</h2>

<p>Automation is what developers do for a living. Manual testing is tedious and repetitive, two key characteristics of the work we love to automate. Yet automated testing is quite intimidating for newbies and veterans alike. Testing tends to be an art: the more you practice, the better you hone your craft.</p>

<blockquote><p>In the blogosphere, – My <code>node</code> Test Strategy  ~ <a href="https://remysharp.com/2015/12/14/my-node-test-strategy">RSharper Blog</a>. – <a href="https://fredkschott.com/post/2014/05/nodejs-testing-essentials/"><code>nodejs</code> testing essentials</a></p></blockquote>

<h2 id="bdd-versus-tdd">BDD versus TDD</h2>

<p><em>Why should we even test?</em></p>

<p>The value of testing is unanimously accepted within the developer community; the question is always around <em>how</em> to go about it.</p>

<p>There is a discussion, mentioned in the first chapter, between @kentbeck, @martinfowler and @dhh that made the rounds on social media and blogs, and finally became a subject of reflection in the community. When dealing with legacy code, there should be balance: <code>tdd</code> is only one tool in our toolbox.</p>

<p>In the book we do the following exercise as an alternative to classic <code>tdd</code>: <strong><em>read, analyze, modify if necessary, rinse and repeat</em></strong>. We cut the bullshit, test whatever needs to be tested, and let nature take its course.</p>

<p>One thing is clear: We cannot guarantee the sanity of a piece of code unless it is tested. The remaining question is on <em>“How”</em> to go about testing.</p>

<blockquote><p>There is a summary of the discussions mentioned earlier, titled <a href="https://martinfowler.com/articles/is-tdd-dead/">Is TDD Dead?</a>. In the blogosphere, – BDD-TDD ~ <a href="https://www.robotlovesyou.com/bdd-tdd/">RobotLovesYou Blog</a>. – My <code>node</code> Test Strategy  ~ <a href="https://remysharp.com/2015/12/14/my-node-test-strategy">RSharper Blog</a> – A TDD Approach to Building a Todo API Using <code>nodejs</code> and <code>mongodb</code> ~ <a href="https://semaphoreci.com/community/tutorials/a-tdd-approach-to-building-a-todo-api-using-node-js-and-mongodb">SemaphoreCI Community Tutorials</a></p></blockquote>

<h2 id="what-should-be-tested">What should be tested</h2>

<p>Before we dive into it, let&#39;s re-examine the <strong><em>pros</em></strong> and <strong>cons</strong> of automated tests — in the current case, unit tests.</p>

<p><strong>Pros</strong>:</p>
<ul><li>Builds release confidence</li>
<li>Prevents bugs in common use cases as well as unexpected ones</li>
<li>Helps a project&#39;s new developers better understand the code</li>
<li>Improves confidence when refactoring code</li>
<li>A well-tested product improves customer experience</li></ul>

<p><strong>Cons</strong>:</p>
<ul><li>Takes time to write</li>
<li>Increases the learning curve</li></ul>

<p>At this point, if we agree that the pros outweigh the cons, we can set an ideal of testing everything: every feature of the product, every function of the code. Re-testing large applications manually is daunting, exhausting, and sometimes simply not feasible.</p>

<p>A good way to think about testing is not in terms of layers (controllers, models, etc.); layers tend to be big. It is better to think in terms of something much smaller, like a function (the TDD way) or a feature (the BDD way).</p>

<p>In brief, every controller, piece of business logic, utility library, <code>nodejs</code> server, and route, all features, are set to be tested ahead of release.</p>

<blockquote><p>There is an article on this blog that gives more insight on — How to create good test cases (Case &gt; Feature &gt; Expectations | GivenWhenThen) — titled <a href="how-to-write-test-cases-developers-will-love-reading">“How to write test cases developers will love reading”</a>. In the blogosphere, – <a href="https://semaphoreci.com/community/tutorials/getting-started-with-node-js-and-mocha">Getting started with <code>nodejs</code> and <code>mocha</code></a></p></blockquote>

<h2 id="choosing-the-right-testing-tools">Choosing the right testing tools</h2>

<p>There is no shortage of tools in the <code>nodejs</code> community. The problem is <em>analysis paralysis</em>. Whenever the time comes to choose testing tools, several layers should be taken into account: test runners, test doubles, reporting, and eventually any compiler that needs to be added to the mix.</p>

<p>Other than that, there is a list of a few things to consider when choosing a testing framework and other testing tools:</p>
<ul><li>Learning curve</li>
<li>How easy it is to integrate into the project or existing testing frameworks</li>
<li>How long it takes to debug testing code</li>
<li>How good the documentation is</li>
<li>How big the community is, and how well the library is maintained</li>
<li>What it may help solve faster (spies, mocking, coverage reports, etc.)</li>
<li>Instrumentation and test reporting, just to name a few</li></ul>

<blockquote><p>There are sections dedicated to providing hints and suggestions throughout the book. There is also this article <a href="./how-to-choose-the-right-tools">“How to choose the right tools”</a> on this blog that gives a baseline framework to choose, not only for testing frameworks but any tool. Finally, In the blogosphere, – <a href="https://thejsguy.com/2015/01/12/jasmine-vs-mocha-chai-and-sinon.html"><code>jasmine</code> vs. <code>mocha</code>, <code>chai</code> and <code>sinon</code></a>. – Evan Hahn has pretty good examples of the use of test doubles in <a href="https://evanhahn.com/how-do-i-jasmine/">How do I <code>jasmine</code></a> blog post.  – <a href="https://semaphoreci.com/community/tutorials/getting-started-with-node-js-and-jasmine">Getting started with <code>nodejs</code> and <code>jasmine</code></a> – has some pretty amazing examples, and is simple to start with. – <a href="https://51elliot.blogspot.ca/2013/08/testing-expressjs-rest-api-with-mocha.html">Testing <code>expressjs</code> REST APIs with Mocha</a></p></blockquote>

<h2 id="testing-servers">Testing servers</h2>

<p>The not-so-obvious part when testing servers is how to simulate starting and stopping the server. These two operations should not bootstrap dependent servers (database, data-stores) or cause side effects (network requests, writing to files), to reduce the risk associated with running an actual server.</p>

<blockquote><p>There is a chapter dedicated to testing servers in the book. There is also an article on this blog that can give more insights. In the blogosphere, – there is a better code structure organization, one that makes it easy to test and get good test coverage, in <a href="https://brianstoner.com/blog/testing-in-nodejs-with-mocha/">“Testing <code>nodejs</code> with mocha”</a> – <a href="https://glebbahmutov.com/blog/how-to-correctly-unit-test-express-server/">How to correctly unit test express server</a></p></blockquote>

<h2 id="testing-modules">Testing modules</h2>

<p>Testing modules is not that different from testing a function or a class. When we start looking at it from this angle, things become a little easier.</p>

<p>A grain of salt: a module that is not directly a core component of our application should be left alone and mocked out entirely when possible. This way we keep things isolated.</p>
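
<p>Dependency injection is one simple way to mock a non-core module out entirely. In this sketch (all names hypothetical), a signup service receives its mailer instead of requiring it, so the test can swap in a hand-rolled fake and keep the side effect isolated:</p>

```javascript
// The mailer module is not core to the app, so tests replace it wholesale.
function makeSignupService(mailer) {
  return {
    signup(email) {
      // ...persist the user somewhere (elided)...
      mailer.send(email, 'Welcome!');   // the side effect we want to isolate
      return { email, status: 'created' };
    },
  };
}

// A hand-rolled fake records calls instead of sending real email.
const sent = [];
const fakeMailer = { send: (to, body) => sent.push({ to, body }) };

const service = makeSignupService(fakeMailer);
const user = service.signup('ada@example.com');

console.log(user.status, sent.length); // created 1
```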

<blockquote><p>There are dedicated sections in every chapter about modularization, as well as a chapter dedicated to testing utility libraries (modules) in the book. There is also an entire series of articles — a more theoretical <a href="./how-to-make-nodejs-applications-modular">“How to make <code>nodejs</code> applications modular”</a> and a more technical <a href="./modularizing-nodejs-applications">“How to modularize <code>nodejs</code> applications”</a> — on this blog about modularization techniques. In the blogosphere, – <a href="http://bites.goodeggs.com/posts/export-this/">Export This: Interface Design Patterns for <code>nodejs</code> Modules</a> by Alon Salant, CEO of Good Eggs, and <a href="https://darrenderidder.github.io/talks/ModulePatterns/#/"><code>nodejs</code> module patterns using simple examples</a> by <a href="https://twitter.com/73rhodes">Darren DeRidder</a> – <a href="https://medium.com/philosophie-is-thinking/modularize-your-chat-app-or-how-to-write-a-node-js-express-app-in-more-than-one-file-bfae2d6b69df#.hfb4r6z3i">How to modularize your Chat Application</a></p></blockquote>

<h2 id="testing-routes">Testing routes</h2>

<p><em>Challenges while testing <code>expressjs</code> Routes</em></p>

<p>Some of the challenges associated with testing routes are <em>testing authenticated routes</em>, <em>mocking requests</em>, <em>mocking responses</em> as well as <em>testing routes in isolation without a need to spin up a server</em>. When testing routes, it is easy to fall into <em>integration testing trap</em>, either for simplicity or for lack of motivation to dig deeper.</p>

<blockquote><p>The integration testing trap is <em>when a developer confuses an integration test (or E2E test) with a unit test, and vice versa</em>. The key to balanced test coverage is identifying early the kind of tests adequate for a given context, and what percentage of each kind to write.</p></blockquote>

<p>For a test to be a unit test in a route-testing context, it should:</p>
<ul><li>Focus on testing a code block (function, class, etc.), not the output of a route</li>
<li>Mock requests to third-party systems (payment gateways, email systems, etc.)</li>
<li>Mock database read/write operations</li>
<li>Test worst-case scenarios such as missing data and missing data structures</li></ul>
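
<p>The following sketch applies those points to a hypothetical <code>getSong</code> handler: plain objects stand in for <code>req</code> and <code>res</code>, so no server is spun up and no database is touched:</p>

```javascript
// An express-style handler under test, shaped as (req, res, next).
function getSong(req, res, next) {
  if (!req.params.id) return next(new Error('missing id'));
  res.status(200).json({ id: req.params.id, title: 'stubbed title' });
}

// Minimal hand-rolled res double that records what the handler did.
function makeRes() {
  const res = { statusCode: null, body: null };
  res.status = (code) => { res.statusCode = code; return res; }; // chainable, like express
  res.json = (payload) => { res.body = payload; return res; };
  return res;
}

const res = makeRes();
getSong({ params: { id: '42' } }, res, () => {});
console.log(res.statusCode, res.body.id); // 200 42
```

<p>The worst-case scenario (missing <code>id</code>) is exercised the same way, by asserting that <code>next</code> was called with an error.</p>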

<blockquote><p>There is a chapter dedicated to testing routes in the book. There is also this article <a href="./testing-expressjs-routes">“Testing <code>expressjs</code> Routes”</a> on this blog that gives more insight on the subject. In the blogosphere – <a href="https://semaphoreci.com/community/tutorials/a-tdd-approach-to-building-a-todo-api-using-node-js-and-mongodb">A TDD approach to building a todo API using <code>nodejs</code> and <code>mongodb</code></a> – Marcus on <code>supertest</code> ~ <a href="https://www.marcusoft.net/2014/02/mnb-supertest.html">Marcus Soft Blog</a></p></blockquote>

<h2 id="testing-controllers">Testing controllers</h2>

<p>When modularizing route handlers, there is a realization that they may also be grouped into a layer of their own, or even classes. In MVC jargon, this layer is also known as the controller layer.</p>

<p>Challenges when testing controllers are, unsurprisingly, the same as when testing <code>expressjs</code> route handlers. The controller layer thrives when there is a service layer. Mocking database read/write operations or service layers that are not core or critical to validating the controller&#39;s expectations, mocking controller request/response objects, and, when necessary, mocking some middleware functions are some of those challenges.</p>
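
<p>A sketch of the service-layer swap, with hypothetical names throughout: the controller receives its service, so the test hands it a fake that resolves canned data and no database is involved:</p>

```javascript
// Controller that delegates to a service layer.
function makeUserController(userService) {
  return {
    async show(req, res) {
      const user = await userService.findById(req.params.id);
      if (!user) return res.status(404).json({ error: 'not found' });
      return res.status(200).json(user);
    },
  };
}

// Fake service layer: canned data instead of a database round trip.
const fakeService = {
  findById: async (id) => (id === '1' ? { id: '1', name: 'Ada' } : null),
};

// Minimal res double, chainable like the real one.
function recordRes() {
  const rec = {};
  rec.status = (code) => { rec.code = code; return rec; };
  rec.json = (body) => { rec.body = body; return rec; };
  return rec;
}

const res = recordRes();
makeUserController(fakeService)
  .show({ params: { id: '1' } }, res)
  .then(() => console.log(res.code, res.body.name)); // 200 Ada
```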

<blockquote><p>There is a chapter dedicated to testing controllers in the book. There is also this article <a href="./testing-nodejs-controllers-with-expressjs-framework">Testing <code>nodejs</code> controllers with <code>expressjs</code> framework</a> on this blog that gives more insight on the subject. In the blogosphere, – This article covers Mocking Responses, etc — <a href="https://www.terlici.com/2015/09/21/node-express-controller-testing.html">How to test express controllers</a>.</p></blockquote>

<h2 id="testing-services">Testing services</h2>

<p>There are some instances where adding a service layer makes sense.</p>

<p>One of those instances is when an application has a collection of single-purpose functions under a utility (utils) umbrella. Chances are some of the functions under that umbrella are related in terms of the features or functionality they offer, or both. Such functions are a good use case for being grouped under a class: a service.</p>

<p>Another good example is applications that heavily use the model layer. Chances are the same functions are re-used in multiple places, and fixing an issue means fixing it in multiple places as well. When that is the case, such functions can be grouped under one banner, in such a way that an update to one function is reflected in every place the function is used.</p>

<p>From these two use cases, testing services has no <em>one-size-fits-all</em> strategy. Every service should be dealt with depending on the context it operates in.</p>
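
<p>As an example of the first use case, here is a hypothetical <code>PriceService</code> grouping two related pricing utilities. Being pure computation, this particular service can be tested with plain assertions and no test doubles at all, illustrating that the strategy really does depend on the context:</p>

```javascript
// Related utility functions promoted into a service class, so a fix in one
// place is reflected everywhere the service is used.
class PriceService {
  constructor(taxRate) {
    this.taxRate = taxRate;
  }
  // Net price extracted from a tax-inclusive gross, rounded to cents.
  net(gross) {
    return Math.round((gross / (1 + this.taxRate)) * 100) / 100;
  }
  // Tax portion of a tax-inclusive gross, rounded to cents.
  tax(gross) {
    return Math.round((gross - this.net(gross)) * 100) / 100;
  }
}

const svc = new PriceService(0.2);
console.log(svc.net(120), svc.tax(120)); // 100 20
```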

<blockquote><p>There is a chapter dedicated to testing services in the book. In the blogosphere, – <em>“Building Structured Backends with <code>nodejs</code> and HexNut”</em> by Francis Stokes ~ aka @fstokesman on Twitter <em><a href="https://itnext.io/build-structured-web-socket-backends-in-node-with-hexnut-1d505c9c30b0">source ...</a></em></p></blockquote>

<h2 id="testing-middleware">Testing middleware</h2>

<p>Middleware, in <code>expressjs</code> (<code>connectjs</code>) jargon, are in a sense hooks that intercept a request, process it, and forward the result to the rest of the route chain. It is no surprise that testing middleware shares the same challenges as testing route handlers and controllers.</p>
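
<p>Since a middleware is just a <code>(req, res, next)</code> function, it can be invoked directly and assertions made on what it forwarded. A sketch with a hypothetical <code>requireAuth</code> middleware and a hard-coded token:</p>

```javascript
// Hypothetical auth middleware: enriches the request or forwards an error.
function requireAuth(req, res, next) {
  if (req.headers.authorization === 'Bearer sesame') {
    req.user = { name: 'Ada' };       // enrich the request, then pass it on
    return next();
  }
  return next(new Error('unauthorized'));
}

// Record how next() was called; no server is involved.
let forwardedError = null;
const req = { headers: { authorization: 'Bearer sesame' } };
requireAuth(req, {}, (err) => { forwardedError = err || null; });

console.log(req.user.name, forwardedError); // Ada null
```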

<blockquote><p>There is a chapter dedicated to testing middleware in the book. There is also this article <a href="./testing-expressjs-middleware">“Testing <code>expressjs</code> Middleware”</a> on this blog that gives more insight on the subject. In the blogosphere, – <a href="https://www.terlici.com/2015/09/21/node-expressjs-controller-testing.html">How to test <code>expressjs</code> controllers</a></p></blockquote>

<h2 id="testing-asynchronous-code">Testing asynchronous code</h2>

<p>Asynchronous code is a wide subject in the <code>nodejs</code> community. Things ranging from regular callbacks, promises, async/await constructs, streams, and event streams (reactive) all fall under the asynchronous umbrella.</p>

<p>The challenges associated with asynchronous testing depend on the use case and context at hand. However, there are striking similarities between, say, testing <code>async/await</code> and testing a promise.</p>

<p>When an object is available, it makes sense to get a hold of it and execute assertions once it resolves. That is feasible for promises, streams, and the async/await construct. However, when the object is some kind of event, the hold on the object can be used to add a listener and assert once the listener fires.</p>

<blockquote><p>There are multiple chapters dedicated to testing asynchronous code in the book. There are also multiple articles on this blog that give more insight on the subject, such as – <a href="./how-to-stub-a-stream-function">“How to stub a <code>stream</code> function”</a> – <a href="./how-to-stub-promise-function-and-mock-resolved-output">“How to Stub Promise Function and Mock Resolved Output”</a> – <a href="./testing-nodejs-streams">“Testing <code>nodejs</code> streams”</a>.</p></blockquote>

<h2 id="testing-models">Testing models</h2>

<blockquote><p>Testing models goes hand in hand with mocking database access functions.</p></blockquote>

<p>Functions that access or change database state can be replaced by spies or fakes: custom function replacements capable of supplying or emulating results similar to those of the functions they replace.</p>

<p><code>sinon</code> may not be a unanimous choice, but it is a feature-complete, battle-tested test-double library, among many others to choose from.</p>
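
<p>Under the hood, a stub is just a swapped method with a way back. The following hand-rolled sketch (a simplified stand-in for what <code>sinon.stub</code> offers, with a hypothetical <code>User</code> model) shows the idea:</p>

```javascript
// Hypothetical model whose real read would hit the database.
const User = {
  findById(id, cb) {
    throw new Error('no database in tests!');
  },
};

// Minimal stub helper: replace a method, remember the original.
function stub(obj, method, fake) {
  const original = obj[method];
  obj[method] = fake;
  return { restore: () => { obj[method] = original; } };
}

// Replace the read with a fake that supplies a similar-shaped result.
const stubbed = stub(User, 'findById', (id, cb) => cb(null, { id, name: 'Ada' }));

User.findById('7', (err, user) => console.log(err, user.name)); // null Ada

stubbed.restore(); // the original (database-backed) method is back
```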

<blockquote><p>There is a chapter dedicated to testing models in the book. There is also an article on this blog that gives more insight on the subject. In the blogosphere, – <a href="https://codeutopia.net/blog/2016/06/10/mongoose-models-and-unit-tests-the-definitive-guide/">Mocking/Stubbing/Spying mongoose models</a> – <a href="https://stackoverflow.com/a/11567859/132610">stubbing mongoose model question and answers on StackOverflow</a> – Mocking database calls by wrapping <code>mongoose</code> with <a href="https://github.com/mfncooper/mockery"><code>mockgoose</code></a></p></blockquote>

<h2 id="testing-websockets">Testing WebSockets</h2>

<p>Some of the challenges of testing WebSockets can be summarized as trying to simulate sending and receiving a message on the <code>WebSocket</code> endpoint.</p>

<blockquote><p>There is a chapter dedicated to testing WebSockets in the book. There is also this <a href="./testing-nodejs-websocket-endpoints">article on this blog that can give more ideas on how to go about testing WebSocket endpoints</a> — another one on <a href="./integration-of-websockets-in-nodejs-application">how to integrate WebSockets with <code>nodejs</code></a>. Elsewhere in the blogosphere, – <a href="https://liamkaufman.com/blog/2012/01/28/testing-socketio-with-mocha-should-and-socketio-client/">Testing <code>socket.io</code> with <code>mocha</code>, <code>should.js</code> and <code>socket.io</code> client</a> – <a href="https://stackoverflow.com/questions/25532692/how-to-share-sessions-with-socket-io-1-x-and-express-4-x">sharing session between <code>expressjs</code> and <code>socket.io</code></a></p></blockquote>

<h2 id="testing-background-jobs">Testing background jobs</h2>

<p>Background jobs bring batch processing to the <code>nodejs</code> ecosystem. They constitute a special case of asynchronous communication that spans time and the processes on which the system is running.</p>

<p>Testing this kind of complex construct requires distilling the fundamental work done by each function or construct, focusing on the signal without losing the big picture. It requires quite a paradigm shift (<em>a term used with reservation</em>).</p>
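
<p>Distilling, in practice, can mean extracting the job&#39;s handler and invoking it directly with a fake payload and fake dependencies, instead of running a real queue. The job shape and names below are hypothetical:</p>

```javascript
// A background job reduced to a plain async handler with injected deps.
async function resizeImageJob(payload, deps) {
  const image = await deps.storage.fetch(payload.imageId);
  const thumb = deps.resize(image, 128);
  await deps.storage.save(payload.imageId + '-thumb', thumb);
  return { done: true };
}

// Fakes for every dependency: the test focuses on the job's own logic,
// not on storage or image libraries.
const saved = {};
const deps = {
  storage: {
    fetch: async () => 'RAW_IMAGE_BYTES',
    save: async (key, data) => { saved[key] = data; },
  },
  resize: (img, size) => `${img}@${size}px`,
};

resizeImageJob({ imageId: 'img1' }, deps)
  .then((r) => console.log(r.done, saved['img1-thumb'])); // true RAW_IMAGE_BYTES@128px
```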

<blockquote><p>There is a chapter dedicated to testing background jobs in the book. There is an article <a href="./testing-nodejs-streams">Testing <code>nodejs</code> streams</a> on this blog that gives more insight on the subject. In the blogosphere, – Mocking/Stubbing/Spying <code>mongoose</code> models ~ <a href="https://codeutopia.net/blog/2016/06/10/mongoose-models-and-unit-tests-the-definitive-guide/">CodeUtopia Blog</a></p></blockquote>

<h2 id="conclusion">Conclusion</h2>

<p>Some source code samples came from Q&amp;A sites such as StackOverflow, hacker <em>gists</em>, GitHub documentation, developer blogs, and my personal projects.</p>

<p>There are some aspects of the ecosystem that are not mentioned, not because they are not important, but because covering all of them would fill a book.</p>

<p>In this article, we highlighted what it takes to test various layers, while drawing a distinction between the BDD/TDD testing schools. There are additional complementary materials in the <strong>“Testing <code>nodejs</code> applications”</strong> book.</p>

<h2 id="references">References</h2>
<ul><li><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></li>
<li>Testing MEAN stack with Mocha ~ <a href="https://thewayofcode.wordpress.com/2013/04/21/how-to-build-and-test-rest-api-with-nodejs-express-mocha/">The Way of Code</a> ~ “How to build and test REST with <code>nodejs</code> Express Mocha”</li></ul>

<p><a href="https://getsimple.works/tag:snippets" class="hashtag"><span>#</span><span class="p-category">snippets</span></a> <a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:testing" class="hashtag"><span>#</span><span class="p-category">testing</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:bdd" class="hashtag"><span>#</span><span class="p-category">bdd</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/overview-on-testing-nodejs-applications</guid>
      <pubDate>Thu, 17 Jun 2021 04:34:31 +0000</pubDate>
    </item>
    <item>
      <title>Testing expressjs routes without spinning up a server.</title>
      <link>https://getsimple.works/testing-expressjs-routes-without-spinning-up-a-server-y33d?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In most integration and end-to-end routes testing, a live server may be deemed critical to make reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore the combination of mocking HTTP requests/responses to make use of an actual server obsolete. &#xA;&#xA;In this article we will talk about: &#xA;&#xA;Mocking the Server instance  &#xA;Mocking Route&#39;s Request/Response objects&#xA;Modularization of routes and revealing server instance&#xA;Auto reload(hot reload) using:nodemon, supervisor or forever&#xA;&#xA;  Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book.  Testing nodejs Applications Book Cover&#xA;&#xA;Show me the code&#xA;&#xA;//&#xA;var User = require(&#39;./models&#39;).User; &#xA;module.exports = function getProfile(req, res, next){&#xA;  User.findById(req.params.id, function(error, user){&#xA;    if(error) return next(error);&#xA;    return res.status(200).json(user);&#xA;  });&#xA;};&#xA;&#xA;//Router that Authentication Middleware&#xA;var router = require(&#39;express&#39;).Router();&#xA;var authenticated = require(&#39;./middleware/authenticated&#39;);&#xA;var getUsers = require(&#39;./users/get-user&#39;);&#xA;router.get(&#39;/users/:id&#39;, authenticated, getUser);&#xA;module.exports = router;&#xA;&#xA;What can possibly go wrong?&#xA;&#xA;When trying to figure out how to approach testing expressjs routes, the driving force behind falling into the integration testing trap is the need to start a server. 
the following points may be a challenge:&#xA;&#xA;Routes should be served at any time while testing &#xA;Testing in a sandboxed environments restricts server to use(open new ports, serving requests, etc) &#xA;Mocking request/response objects to wipe need of a server out of the picture&#xA;&#xA;Testing routes without spinning up a server&#xA;&#xA;The key is mocking request/response objects. A typical REST integration testing shares similarities with the following snippet. &#xA;&#xA;var app = require(&#39;express&#39;).express(),&#xA;  request = require(&#39;./support/http&#39;);&#xA;&#xA;describe(&#39;req .route&#39;, function(){&#xA;  it(&#39;should serve on route /user/:id/edit&#39;, function(done){&#xA;    app.get(&#39;/user/:id/edit&#39;, function(req, res){&#xA;      expect(req.route.path).to.equal(&#39;/user/:id/edit&#39;);&#xA;      res.end();&#xA;    });&#xA;&#xA;    request(app)&#xA;      .get(&#39;/user/12/edit&#39;)&#xA;      .expect(200, done);&#xA;  });&#xA;  it(&#39;should serve get requests&#39;, function(done){&#xA;    app.get(&#39;/user/:id/edit&#39;, function(req, res){&#xA;      expect(req.route.method).to.equal(&#39;get&#39;);&#xA;      res.end();&#xA;    });&#xA;&#xA;    request(app)&#xA;    .get(&#39;/user/12/edit&#39;)&#xA;    .expect(200, done);&#xA;  });&#xA;});&#xA;Example:&#xA;&#xA;  example from so and supertest. supertest spins up a server if necessary. In case we don&#39;t want to have a server, then an alternative dupertest can be a reasonable alternative. request = require(&#39;./support/http&#39;) is the utility that may use either of those two libraries to provide a request. &#xA;&#xA;Choosing tools &#xA;&#xA;  If you haven&#39;t already, reading &#34;How to choose the right tools&#34; blog post gives insights on a framework we used to choose the tools we suggest in this blog. 
&#xA;&#xA;Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes by mocking out the server: &#xA;&#xA;There exists well respected such as jasmine(jasmine-node), ava, jest in the wild. mocha can just do fine for example sakes. &#xA;There is also code instrumentation tools in the wild. mocha integrates well with istanbul test coverage and reporting library.&#xA;supertest,  nock and dupertest are framework for mocking mocking HTTP, whereas nock intercepts requests. dupertest responds better to our demands(not spinning up a server).  &#xA;&#xA;Workflow&#xA;&#xA;  If you haven&#39;t already, read the &#34;How to write test cases developers will love&#34;&#xA;&#xA;In package.json at &#34;test&#34; - add next line&#xA;  &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;&#xA;OR &#34;nyc test mocha -- --color --reporter mocha-lcov-reporter specs&#34;&#xA;&#xA;Then run the tests using &#xA;$ npm test --coverage &#xA;Example: istanbul generates reports as tests progress&#xA;&#xA;Conclusion&#xA;&#xA;To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer. &#xA;&#xA;Testing nodejs routes are quite intimidating on the first encounter. This article contributed to shifting fear into opportunities. &#xA;&#xA;Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good meaningful message is pure art. There are additional complimentary materials in the &#34;Testing nodejs applications&#34; book. 
&#xA;&#xA;References&#xA;&#xA;Testing nodejs Applications book&#xA;A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials&#xA;&#34;How to build and test REST with nodejs Express Mocha&#34; ~  The Way of Code&#xA;&#xA;#tdd #testing #nodejs #expressjs #server]]&gt;</description>
<content:encoded><![CDATA[<p>In most integration and <em>end-to-end</em> route testing, a live server may be deemed critical to making reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore how mocking HTTP requests/responses makes an actual server obsolete.</p>

<p><strong><em>In this article we will talk about:</em></strong></p>
<ul><li>Mocking the Server instance</li>
<li>Mocking Route&#39;s Request/Response objects</li>
<li>Modularization of routes and revealing the server instance</li>
<li>Auto reload (hot reload) using <code>nodemon</code>, <code>supervisor</code>, or <code>forever</code></li></ul>

<blockquote><p>Even though this blog post was designed to offer complementary materials to those who bought my <strong><em><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>, the content can help any software developer tune up their working environment. <strong><em><a href="https://bit.ly/2ZFJytb">You can use this link to buy the book</a></em></strong>. <a href="https://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing nodejs Applications Book Cover"/></a></p></blockquote>

<h2 id="show-me-the-code">Show me the code</h2>

<pre><code class="language-JavaScript">// Route handler: fetches a user profile by id
var User = require(&#39;./models&#39;).User; 
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router that applies the authentication middleware
var router = require(&#39;express&#39;).Router();
var authenticated = require(&#39;./middleware/authenticated&#39;);
var getUser = require(&#39;./users/get-user&#39;);
router.get(&#39;/users/:id&#39;, authenticated, getUser);
module.exports = router;

</code></pre>

<h2 id="what-can-possibly-go-wrong">What can possibly go wrong?</h2>

<p>When trying to figure out how to approach testing <code>expressjs</code> routes, the driving force behind falling into the integration testing trap is the need to start a server. The following points can be a challenge:</p>
<ul><li>Routes should be served at any time while testing</li>
<li>Testing in a sandboxed environment restricts server use (opening new ports, serving requests, etc.)</li>
<li>Mocking request/response objects wipes the need for a server out of the picture</li></ul>

<h2 id="testing-routes-without-spinning-up-a-server">Testing routes without spinning up a server</h2>

<p>The key is mocking request/response objects. A typical REST integration test looks similar to the following snippet.</p>

<pre><code class="language-JavaScript">
var app = require(&#39;express&#39;)(),
  request = require(&#39;./support/http&#39;);

describe(&#39;req.route&#39;, function(){
  it(&#39;should serve on route /user/:id/edit&#39;, function(done){
    app.get(&#39;/user/:id/edit&#39;, function(req, res){
      expect(req.route.path).to.equal(&#39;/user/:id/edit&#39;);
      res.end();
    });

    request(app)
      .get(&#39;/user/12/edit&#39;)
      .expect(200, done);
  });
  it(&#39;should serve get requests&#39;, function(done){
    app.get(&#39;/user/:id/edit&#39;, function(req, res){
      expect(req.route.method).to.equal(&#39;get&#39;);
      res.end();
    });

    request(app)
    .get(&#39;/user/12/edit&#39;)
    .expect(200, done);
  });
});
</code></pre>

<p><em>Example:</em></p>

<blockquote><p>The example is adapted from <a href="https://stackoverflow.com/a/14703801/132610">this SO answer</a> and <a href="https://github.com/visionmedia/express/tree/master/test"><code>supertest</code></a>&#39;s own test suite. <code>supertest</code> spins up a server if necessary. When we do not want a server at all, <a href="https://www.npmjs.com/package/dupertest"><code>dupertest</code></a> is a reasonable alternative. <code>request = require(&#39;./support/http&#39;)</code> is a utility that may use either of those two libraries to provide a request.</p></blockquote>

<h2 id="choosing-tools">Choosing tools</h2>

<blockquote><p>If you haven&#39;t already, reading the <a href="./how-to-choose-the-right-tools.md">“How to choose the right tools”</a> blog post gives insights into the framework we used to choose the tools suggested in this blog.</p></blockquote>

<p>Following our own <em>Choosing the right tools</em> framework, we suggest adopting the following tools when testing <code>expressjs</code> routes while mocking out the server:</p>
<ul><li>There are well-respected test runners in the wild, such as <code>jasmine</code> (<code>jasmine-node</code>), <code>ava</code>, and <code>jest</code>. <code>mocha</code> does just fine for our examples.</li>
<li>There are also code instrumentation tools in the wild. <code>mocha</code> integrates well with the <code>istanbul</code> test coverage and reporting library.</li>
<li><code>supertest</code>, <code>nock</code>, and <code>dupertest</code> are frameworks for mocking HTTP; <code>nock</code> works by intercepting requests. <code>dupertest</code> responds best to our needs (not spinning up a server).</li></ul>

<h2 id="workflow">Workflow</h2>

<blockquote><p>If you haven&#39;t already, read the <a href="./how-to-write-test-cases-developers-will-love.md">“How to write test cases developers will love”</a> blog post.</p></blockquote>

<pre><code class="language-shell"># In package.json, add the following &#34;test&#34; script:
#   &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;
# OR: &#34;nyc test mocha -- --color --reporter mocha-lcov-reporter specs&#34;

# Then run the tests using 
$ npm test --coverage 
</code></pre>

<p><em>Example: <code>istanbul</code> generates reports as tests progress.</em></p>

<h2 id="conclusion">Conclusion</h2>

<p>To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer.</p>

<p>Testing <code>nodejs</code> routes is quite intimidating on first encounter. This article aimed to shift that fear into opportunity.</p>

<p>Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good, meaningful test message is pure art. There are additional complementary materials in the <strong>“Testing <code>nodejs</code> applications”</strong> book.</p>

<h3 id="references">References</h3>
<ul><li><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></li>
<li>A TDD Approach to Building a Todo API Using <code>nodejs</code> and <code>mongodb</code> ~ <a href="https://semaphoreci.com/community/tutorials/a-tdd-approach-to-building-a-todo-api-using-node-js-and-mongodb">SemaphoreCI Community Tutorials</a></li>
<li><em>“How to build and test REST with <code>nodejs</code> Express Mocha”</em> ~  <a href="https://thewayofcode.wordpress.com/2013/04/21/how-to-build-and-test-rest-api-with-nodejs-express-mocha/">The Way of Code</a></li></ul>

<p><a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:testing" class="hashtag"><span>#</span><span class="p-category">testing</span></a> <a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:expressjs" class="hashtag"><span>#</span><span class="p-category">expressjs</span></a> <a href="https://getsimple.works/tag:server" class="hashtag"><span>#</span><span class="p-category">server</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/testing-expressjs-routes-without-spinning-up-a-server-y33d</guid>
      <pubDate>Thu, 17 Jun 2021 04:13:53 +0000</pubDate>
    </item>
    <item>
      <title>Testing expressjs routes without spinning up a server.</title>
      <link>https://getsimple.works/testing-expressjs-routes-without-spinning-up-a-server?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[In most integration and end-to-end routes testing, a live server may be deemed critical to make reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore the combination of mocking HTTP requests/responses to make use of an actual server obsolete. &#xA;&#xA;In this article we will talk about: &#xA;&#xA;Mocking the Server instance  &#xA;Mocking Route&#39;s Request/Response objects&#xA;Modularization of routes and revealing server instance&#xA;Auto reload(hot reload) using:nodemon, supervisor or forever&#xA;&#xA;  Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book.  Testing nodejs Applications Book Cover&#xA;&#xA;Show me the code&#xA;&#xA;//&#xA;var User = require(&#39;./models&#39;).User; &#xA;module.exports = function getProfile(req, res, next){&#xA;  User.findById(req.params.id, function(error, user){&#xA;    if(error) return next(error);&#xA;    return res.status(200).json(user);&#xA;  });&#xA;};&#xA;&#xA;//Router that Authentication Middleware&#xA;var router = require(&#39;express&#39;).Router();&#xA;var authenticated = require(&#39;./middleware/authenticated&#39;);&#xA;var getUsers = require(&#39;./users/get-user&#39;);&#xA;router.get(&#39;/users/:id&#39;, authenticated, getUser);&#xA;module.exports = router;&#xA;&#xA;What can possibly go wrong?&#xA;&#xA;When trying to figure out how to approach testing expressjs routes, the driving force behind falling into the integration testing trap is the need to start a server. 
the following points may be a challenge:&#xA;&#xA;Routes should be served at any time while testing &#xA;Testing in a sandboxed environments restricts server to use(open new ports, serving requests, etc) &#xA;Mocking request/response objects to wipe need of a server out of the picture&#xA;&#xA;Testing routes without spinning up a server&#xA;&#xA;The key is mocking request/response objects. A typical REST integration testing shares similarities with the following snippet. &#xA;&#xA;var app = require(&#39;express&#39;).express(),&#xA;  request = require(&#39;./support/http&#39;);&#xA;&#xA;describe(&#39;req .route&#39;, function(){&#xA;  it(&#39;should serve on route /user/:id/edit&#39;, function(done){&#xA;    app.get(&#39;/user/:id/edit&#39;, function(req, res){&#xA;      expect(req.route.path).to.equal(&#39;/user/:id/edit&#39;);&#xA;      res.end();&#xA;    });&#xA;&#xA;    request(app)&#xA;      .get(&#39;/user/12/edit&#39;)&#xA;      .expect(200, done);&#xA;  });&#xA;  it(&#39;should serve get requests&#39;, function(done){&#xA;    app.get(&#39;/user/:id/edit&#39;, function(req, res){&#xA;      expect(req.route.method).to.equal(&#39;get&#39;);&#xA;      res.end();&#xA;    });&#xA;&#xA;    request(app)&#xA;    .get(&#39;/user/12/edit&#39;)&#xA;    .expect(200, done);&#xA;  });&#xA;});&#xA;Example:&#xA;&#xA;  example from so and supertest. supertest spins up a server if necessary. In case we don&#39;t want to have a server, then an alternative dupertest can be a reasonable alternative. request = require(&#39;./support/http&#39;) is the utility that may use either of those two libraries to provide a request. &#xA;&#xA;Choosing tools &#xA;&#xA;  If you haven&#39;t already, reading &#34;How to choose the right tools&#34; blog post gives insights on a framework we used to choose the tools we suggest in this blog. 
&#xA;&#xA;Following our own Choosing the right tools framework, we suggest adopting the following tools, when testing expressjs routes by mocking out the server: &#xA;&#xA;There exists well respected such as jasmine(jasmine-node), ava, jest in the wild. mocha can just do fine for example sakes. &#xA;There is also code instrumentation tools in the wild. mocha integrates well with istanbul test coverage and reporting library.&#xA;supertest,  nock and dupertest are framework for mocking mocking HTTP, whereas nock intercepts requests. dupertest responds better to our demands(not spinning up a server).  &#xA;&#xA;Workflow&#xA;&#xA;  If you haven&#39;t already, read the &#34;How to write test cases developers will love&#34;&#xA;&#xA;In package.json at &#34;test&#34; - add next line&#xA;  &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;&#xA;OR &#34;nyc test mocha -- --color --reporter mocha-lcov-reporter specs&#34;&#xA;&#xA;Then run the tests using &#xA;$ npm test --coverage &#xA;Example: istanbul generates reports as tests progress&#xA;&#xA;Conclusion&#xA;&#xA;To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer. &#xA;&#xA;Testing nodejs routes are quite intimidating on the first encounter. This article contributed to shifting fear into opportunities. &#xA;&#xA;Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good meaningful message is pure art. There are additional complimentary materials in the &#34;Testing nodejs applications&#34; book. 
&#xA;&#xA;References&#xA;&#xA;Testing nodejs Applications book&#xA;A TDD Approach to Building a Todo API Using nodejs and mongodb ~ SemaphoreCI Community Tutorials&#xA;&#34;How to build and test REST with nodejs Express Mocha&#34; ~  The Way of Code&#xA;&#xA;#tdd #testing #nodejs #expressjs #server]]&gt;</description>
<content:encoded><![CDATA[<p>In most integration and <em>end-to-end</em> route testing, a live server may be deemed critical to making reasonable test assertions. A live server is not always a good idea, especially in a sandboxed environment such as a CI environment where opening server ports may be restricted, if not outright prohibited. In this article, we explore how mocking HTTP requests/responses makes an actual server obsolete.</p>

<p><strong><em>In this article we will talk about:</em></strong></p>
<ul><li>Mocking the Server instance</li>
<li>Mocking Route&#39;s Request/Response objects</li>
<li>Modularization of routes and revealing the server instance</li>
<li>Auto reload (hot reload) using <code>nodemon</code>, <code>supervisor</code>, or <code>forever</code></li></ul>

<blockquote><p>Even though this blog post was designed to offer complementary materials to those who bought my <strong><em><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>, the content can help any software developer tune up their working environment. <strong><em><a href="https://bit.ly/2ZFJytb">You can use this link to buy the book</a></em></strong>. <a href="https://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing nodejs Applications Book Cover"/></a></p></blockquote>

<h2 id="show-me-the-code">Show me the code</h2>

<pre><code class="language-JavaScript">// Route handler: fetches a user profile by id
var User = require(&#39;./models&#39;).User; 
module.exports = function getProfile(req, res, next){
  User.findById(req.params.id, function(error, user){
    if(error) return next(error);
    return res.status(200).json(user);
  });
};

//Router that applies the authentication middleware
var router = require(&#39;express&#39;).Router();
var authenticated = require(&#39;./middleware/authenticated&#39;);
var getUser = require(&#39;./users/get-user&#39;);
router.get(&#39;/users/:id&#39;, authenticated, getUser);
module.exports = router;

</code></pre>

<h2 id="what-can-possibly-go-wrong">What can possibly go wrong?</h2>

<p>When trying to figure out how to approach testing <code>expressjs</code> routes, the driving force behind falling into the integration testing trap is the need to start a server. The following points can be a challenge:</p>
<ul><li>Routes should be served at any time while testing</li>
<li>Testing in a sandboxed environment restricts server use (opening new ports, serving requests, etc.)</li>
<li>Mocking request/response objects wipes the need for a server out of the picture</li></ul>

<h2 id="testing-routes-without-spinning-up-a-server">Testing routes without spinning up a server</h2>

<p>The key is mocking request/response objects. A typical REST integration test looks similar to the following snippet.</p>

<pre><code class="language-JavaScript">
var app = require(&#39;express&#39;)(),
  request = require(&#39;./support/http&#39;);

describe(&#39;req.route&#39;, function(){
  it(&#39;should serve on route /user/:id/edit&#39;, function(done){
    app.get(&#39;/user/:id/edit&#39;, function(req, res){
      expect(req.route.path).to.equal(&#39;/user/:id/edit&#39;);
      res.end();
    });

    request(app)
      .get(&#39;/user/12/edit&#39;)
      .expect(200, done);
  });
  it(&#39;should serve get requests&#39;, function(done){
    app.get(&#39;/user/:id/edit&#39;, function(req, res){
      expect(req.route.method).to.equal(&#39;get&#39;);
      res.end();
    });

    request(app)
    .get(&#39;/user/12/edit&#39;)
    .expect(200, done);
  });
});
</code></pre>

<p><em>Example:</em></p>

<blockquote><p>The example is adapted from <a href="https://stackoverflow.com/a/14703801/132610">this SO answer</a> and <a href="https://github.com/visionmedia/express/tree/master/test"><code>supertest</code></a>&#39;s own test suite. <code>supertest</code> spins up a server if necessary. When we do not want a server at all, <a href="https://www.npmjs.com/package/dupertest"><code>dupertest</code></a> is a reasonable alternative. <code>request = require(&#39;./support/http&#39;)</code> is a utility that may use either of those two libraries to provide a request.</p></blockquote>

<h2 id="choosing-tools">Choosing tools</h2>

<blockquote><p>If you haven&#39;t already, reading the <a href="./how-to-choose-the-right-tools.md">“How to choose the right tools”</a> blog post gives insights into the framework we used to choose the tools suggested in this blog.</p></blockquote>

<p>Following our own <em>Choosing the right tools</em> framework, we suggest adopting the following tools when testing <code>expressjs</code> routes while mocking out the server:</p>
<ul><li>There are well-respected test runners in the wild, such as <code>jasmine</code> (<code>jasmine-node</code>), <code>ava</code>, and <code>jest</code>. <code>mocha</code> does just fine for our examples.</li>
<li>There are also code instrumentation tools in the wild. <code>mocha</code> integrates well with the <code>istanbul</code> test coverage and reporting library.</li>
<li><code>supertest</code>, <code>nock</code>, and <code>dupertest</code> are frameworks for mocking HTTP; <code>nock</code> works by intercepting requests. <code>dupertest</code> responds best to our needs (not spinning up a server).</li></ul>

<h2 id="workflow">Workflow</h2>

<blockquote><p>If you haven&#39;t already, read the <a href="./how-to-write-test-cases-developers-will-love.md">“How to write test cases developers will love”</a> blog post.</p></blockquote>

<pre><code class="language-shell"># In package.json, add the following &#34;test&#34; script:
#   &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;
# OR: &#34;nyc test mocha -- --color --reporter mocha-lcov-reporter specs&#34;

# Then run the tests using 
$ npm test --coverage 
</code></pre>

<p><em>Example: <code>istanbul</code> generates reports as tests progress.</em></p>

<h2 id="conclusion">Conclusion</h2>

<p>To sum up, it pays off to spend extra time writing some tests. Effective tests can be written before, as well as after writing code. The balance should be at the discretion of the developer.</p>

<p>Testing <code>nodejs</code> routes is quite intimidating on first encounter. This article aimed to shift that fear into opportunity.</p>

<p>Removing the server dependency makes it easy to validate the most common use cases at a lower cost. Writing a good, meaningful test message is pure art. There are additional complementary materials in the <strong>“Testing <code>nodejs</code> applications”</strong> book.</p>

<h3 id="references">References</h3>
<ul><li><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></li>
<li>A TDD Approach to Building a Todo API Using <code>nodejs</code> and <code>mongodb</code> ~ <a href="https://semaphoreci.com/community/tutorials/a-tdd-approach-to-building-a-todo-api-using-node-js-and-mongodb">SemaphoreCI Community Tutorials</a></li>
<li><em>“How to build and test REST with <code>nodejs</code> Express Mocha”</em> ~  <a href="https://thewayofcode.wordpress.com/2013/04/21/how-to-build-and-test-rest-api-with-nodejs-express-mocha/">The Way of Code</a></li></ul>

<p><a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:testing" class="hashtag"><span>#</span><span class="p-category">testing</span></a> <a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:expressjs" class="hashtag"><span>#</span><span class="p-category">expressjs</span></a> <a href="https://getsimple.works/tag:server" class="hashtag"><span>#</span><span class="p-category">server</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/testing-expressjs-routes-without-spinning-up-a-server</guid>
      <pubDate>Thu, 17 Jun 2021 00:33:29 +0000</pubDate>
    </item>
    <item>
      <title>Testing nodejs streams</title>
      <link>https://getsimple.works/testing-nodejs-streams?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[Asynchronous computation model makes nodejs flexible to perform heavy computations while keeping a relatively lower memory footprint. The stream API is one of those computation models, this article explores how to approach testing it. &#xA;&#xA;In this article we will talk about: &#xA;&#xA;Difference between Readable/Writable and Duplex streams &#xA;Testing Writable stream&#xA;Testing Readable stream &#xA;Testing Duplex or Transformer streams &#xA;&#xA;  Even though this blog post was designed to offer complementary materials to those who bought my Testing nodejs Applications book, the content can help any software developer to tuneup working environment. You use this link to buy the book.  Testing nodejs Applications Book Cover &#xA;&#xA;Show me the code&#xA;&#xA;//Read + Transform +Write Stream processing example&#xA;var gzip = require(&#39;zlib&#39;).createGzip(),&#xA;    route = require(&#39;expressjs&#39;).Router(); &#xA;//getter() reads a large file of songs metadata, transform and send back scaled down metadata &#xA;route.get(&#39;/songs&#39; function getter(req, res, next){&#xA;    let rstream = fs.createReadStream(&#39;./several-tb-of-songs.json&#39;); &#xA;    rstream.&#xA;        .pipe(new MetadataStreamTransformer())&#xA;        .pipe(gzip)&#xA;        .pipe(res);&#xA;    // forwaring the error to next handler     &#xA;    rstream.on(&#39;error&#39;, error =  next(error, null));&#xA;});&#xA;&#xA;//Transformer Stream example&#xA;const inherit = require(&#39;util&#39;).inherits,&#xA;    Transform = require(&#39;stream&#39;).Tranform;&#xA;&#xA;function MetadataStreamTransformer(options){&#xA;    if(!(this instanceof MetadataStreamTransformer)){&#xA;        return new MetadataStreamTransformer(options);&#xA;    }&#xA;    // re-enforces object mode chunks&#xA;    this.options = Object.assign({}, options, {objectMode: true});&#xA;    Transform.call(this, this.options);&#xA;}&#xA;&#xA;inherits(MetadataStreamTransformer, 
Transform);&#xA;MetadataStreamTransformer.prototype.transform = function(chunk, encoding, next){&#xA;    //minimalistic implementation &#xA;    //@todo  process chunk + by adding/removing elements&#xA;    let data = JSON.parse(typeof chunk === &#39;string&#39; ? chunk : chunk.toString(&#39;utf8&#39;));&#xA;    this.push({id: (data || {}).id || random() });&#xA;    if(typeof next === &#39;function&#39;) next();&#xA;};&#xA;&#xA;MetadataStreamTransformer.prototype.flush = function(next) {&#xA;    this.push(null);//tells that operation is over &#xA;    if(typeof next === &#39;function&#39;) {next();}&#xA;};&#xA;&#xA;  The example above provides a clear picture of the context in which Readable, Writable, and Duplex(Transform) streams can be used.&#xA;&#xA;What can possibly go wrong?&#xA;&#xA;Streams are particularly hard to test because of their asynchronous nature. That is not an exception for I/O on the filesystem or third-party endpoints. It is easy to fall into the integration testing trap when testing nodejs streams. &#xA;&#xA;Among other things, the following are challenges we may expect when (unit) test streams: &#xA;&#xA;Identify areas where it makes sense to stub&#xA;Choosing the right mock object output to feed into stubs&#xA;Mock streams read/transform/write operations&#xA;&#xA;  There is an article dedicated to stubbing stream functions. Mocking in our case will not go into details about the stubbing parts in the current text. &#xA;&#xA;Choosing tools &#xA;&#xA;  If you haven&#39;t already, reading &#34;How to choose the right tools&#34; blog post gives insights on a framework we used to choose the tools we suggest in this blog. &#xA;&#xA;Following our own &#34;Choosing the right tools&#34; framework. They are not a suggestion, rather the ones that made sense to complete this article: &#xA;&#xA;We can choose amongst a myriad of test runners, for instance, jasmine(jasmine-node), ava or jest. 
mocha was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete. &#xA;The stack mocha, chai, and sinon (assertion and test doubles libraries) worth a shot.  &#xA;node-mocks-http framework for mocking HTTP Request/Response objects. &#xA;Code under test is instrumented to make test progress possible. Test coverage reporting we adopted, also widely adopted by the mocha community, is istanbul. &#xA;&#xA;Workflow&#xA;&#xA;It is possible to generate reports as tests progress. &#xA;&#xA;  latest versions of istanbul uses the nyc name.&#xA;&#xA;In package.json at &#34;test&#34; - add next line&#xA;  &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;&#xA;&#xA;Then run the tests using &#xA;$ npm test --coverage &#xA;&#xA;Show me the tests &#xA;&#xA;  If you haven&#39;t already, read the &#34;How to write test cases developers will love&#34;&#xA;&#xA;We assume we approach testing of fairly large nodejs application from a real-world perspective, and with refactoring in mind. The good way to think about large scale is to focus on smaller things and how they integrate(expand) with the rest of the application. &#xA;&#xA;The philosophy about test-driven development is to write failing tests, followed by code that resolves the failing use cases, refactor rinse and repeat. Most real-world, writing tests may start at any given moment depending on multiple variables one of which being the pressure and timeline of the project at hand. &#xA;&#xA;It is not a new concept for some tests being written after the fact (characterization tests). Another case is when dealing with legacy code, or simply ill-tested code base. That is the case we are dealing with in our code sample use case. &#xA;&#xA;The first thing is rather reading the code and identify areas of improvement before we start writing the code. And the clear improvement opportunity is to eject the function getter() out of the router. 
Our new construct looks as the following: route.get(&#39;/songs&#39;, getter); which allows to test getter() in isolation. &#xA;&#xA;Our skeleton looks a bit as in the following lines. &#xA;&#xA;describe(&#39;getter()&#39;, () =  {&#xA;  let req, res, next, error;&#xA;  beforeEach(() =  {&#xA;    next = sinon.spy();&#xA;    sessionObject = { ... };//mocking session object&#xA;    req = { params: {id: 1234}, user: sessionObject };&#xA;    res = { status: (code) =  { json: sinon.spy() }}&#xA;  });&#xA;    //...&#xA;});&#xA;&#xA;Let&#39;s examine the case where the stream is actually going to fail. &#xA;&#xA;  Note that we lack a way to get the handle on the stream object, as the handler does not return any object to tap into. Luckily, the response and request objects are both instances of streams. So a good mocking can come to our rescue. &#xA;&#xA;//...&#xA;let eventEmitter = require(&#39;events&#39;).EventEmitter,&#xA;  httpMock = require(&#39;node-mocks-http&#39;),&#xA;&#xA;//...&#xA;it(&#39;fails when no songs are found&#39;, done =  {&#xA;    var self = this; &#xA;    this.next = sinon.spy();&#xA;    this.req = httpMock.createRequest({method, url, body})&#xA;    this.res = httpMock.createResponse({eventEmitter: eventEmitter})&#xA;    &#xA;    getter(this.req, this.res, this.next);&#xA;    this.res.on(&#39;error&#39;, function(error){&#xA;        assert(self.next.called, &#39;next() has been called&#39;);&#xA;        done(error);&#xA;    });&#xA;});&#xA;&#xA;Mocking both request and response objects in our context makes more sense. Likewise, we will mock response cases of success, the reader stream&#39;s fs.createReadStream() has to be stubbed and make it eject a stream of fake content. this time, this.res.on(&#39;end&#39;) will be used to make assertions. &#xA;&#xA;Conclusion&#xA;&#xA;Automated testing streams are quite intimidating for newbies and veterans alike. There are multiple enough use cases in the book to get you past that mark.  
&#xA;&#xA;In this article, we reviewed how testing tends to be more of art, than science. We also stressed the fact that, like in any art, practice makes perfect ~ testing streams is particularly challenging especially when a read/write is involved. There are additional complimentary materials in the &#34;Testing nodejs applications&#34; book. &#xA;&#xA;References&#xA;&#xA;Testing nodejs Applications book&#xA;&#xA;#snippets #tdd #streams #nodejs #mocking]]&gt;</description>
<content:encoded><![CDATA[<p>The asynchronous computation model makes <code>nodejs</code> flexible enough to perform heavy computations while keeping a relatively low memory footprint. The stream API is one of those computation models; this article explores how to approach testing it.</p>

<p><strong><em>In this article we will talk about:</em></strong></p>
<ul><li>Difference between Readable/Writable and Duplex streams</li>
<li>Testing Writable stream</li>
<li>Testing Readable stream</li>
<li>Testing Duplex or Transformer streams</li></ul>

<blockquote><p>Even though this blog post was designed to offer complementary materials to those who bought my <strong><em><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>, the content can help any software developer to tune up their working environment. <strong><em><a href="https://bit.ly/2ZFJytb">Use this link to buy the book</a></em></strong>.  <a href="https://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing nodejs Applications Book Cover"/></a></p></blockquote>

<h2 id="show-me-the-code">Show me the code</h2>

<pre><code class="language-JavaScript">//Read + Transform + Write stream processing example
const fs = require(&#39;fs&#39;),
    gzip = require(&#39;zlib&#39;).createGzip(),
    route = require(&#39;express&#39;).Router(); 
//getter() reads a large file of songs metadata, transforms it, and sends back scaled-down metadata 
route.get(&#39;/songs&#39;, function getter(req, res, next){
    let rstream = fs.createReadStream(&#39;./several-tb-of-songs.json&#39;); 
    rstream
        .pipe(new MetadataStreamTransformer())
        .pipe(gzip)
        .pipe(res);
    // forwarding the error to the next handler     
    rstream.on(&#39;error&#39;, error =&gt; next(error));
});

//Transformer stream example
const inherits = require(&#39;util&#39;).inherits,
    Transform = require(&#39;stream&#39;).Transform;

function MetadataStreamTransformer(options){
    if(!(this instanceof MetadataStreamTransformer)){
        return new MetadataStreamTransformer(options);
    }
    // enforces object mode chunks
    this.options = Object.assign({}, options, {objectMode: true});
    Transform.call(this, this.options);
}

inherits(MetadataStreamTransformer, Transform);
MetadataStreamTransformer.prototype._transform = function(chunk, encoding, next){
    //minimalistic implementation 
    //@todo process chunk by adding/removing elements
    let data = JSON.parse(typeof chunk === &#39;string&#39; ? chunk : chunk.toString(&#39;utf8&#39;));
    this.push({id: (data || {}).id || random() });//random() assumed defined elsewhere
    if(typeof next === &#39;function&#39;) next();
};

MetadataStreamTransformer.prototype._flush = function(next) {
    this.push(null);//signals that the operation is over 
    if(typeof next === &#39;function&#39;) {next();}
};
</code></pre>

<blockquote><p>The example above provides a clear picture of the context in which Readable, Writable, and Duplex(Transform) streams can be used.</p></blockquote>

<h2 id="what-can-possibly-go-wrong">What can possibly go wrong?</h2>

<p>Streams are particularly hard to test because of their asynchronous nature, and stream-based I/O against the filesystem or third-party endpoints is no exception. It is easy to fall into the integration-testing trap when testing <code>nodejs</code> streams.</p>

<p>Among other things, the following are challenges to expect when unit testing streams:</p>
<ul><li>Identifying areas where it makes sense to stub</li>
<li>Choosing the right mock output to feed into stubs</li>
<li>Mocking stream read/transform/write operations</li></ul>

<blockquote><p>There is an article dedicated to <a href="./how-to-stub-a-stream-function">stubbing <code>stream</code> functions</a>. The current text will therefore not go into the details of the stubbing parts.</p></blockquote>

<h2 id="choosing-tools">Choosing tools</h2>

<blockquote><p>If you haven&#39;t already, reading the <a href="./how-to-choose-the-right-tools.md">“How to choose the right tools”</a> blog post gives insight into the framework we used to choose the tools suggested in this blog.</p></blockquote>

<p>Following our own <em><a href="./how-to-choose-the-right-tools.md">“Choosing the right tools”</a></em> framework, the tools below are not a prescription; they are simply the ones that made sense to complete this article:</p>
<ul><li>We can choose amongst a myriad of test runners, for instance, <code>jasmine</code>(<code>jasmine-node</code>), <code>ava</code> or <code>jest</code>. <code>mocha</code> was appealing in the context of this writeup, but choosing any other test runner does not make this article obsolete.</li>
<li>The stack <code>mocha</code>, <code>chai</code>, and <code>sinon</code> (assertion and test double libraries) is worth a shot.</li>
<li>The <code>node-mocks-http</code> library for mocking HTTP Request/Response objects.</li>
<li>Code under test is instrumented to make coverage tracking possible. The test coverage reporter we adopted, also widely used by the <code>mocha</code> community, is <code>istanbul</code>.</li></ul>

<h2 id="workflow">Workflow</h2>

<p>It is possible to generate reports as tests progress.</p>

<blockquote><p>The latest versions of <code>istanbul</code> use the <code>nyc</code> name.</p></blockquote>

<pre><code class="language-shell"># In package.json at &#34;test&#34; - add next line
&gt; &#34;istanbul test mocha -- --color --reporter mocha-lcov-reporter specs&#34;

# Then run the tests using 
$ npm test --coverage 
</code></pre>
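<p>With recent releases of <code>istanbul</code> published under the <code>nyc</code> name, an equivalent wiring could look like the following sketch (the reporter flags and script wiring are our assumptions, with <code>nyc</code> and <code>mocha</code> installed as dev dependencies):</p>

```shell
# package.json "test" script, nyc flavor (illustrative):
#   "test": "nyc --reporter=lcov --reporter=text mocha specs"
# then the coverage report prints as the tests run:
$ npm test
```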

<h2 id="show-me-the-tests">Show me the tests</h2>

<blockquote><p>If you haven&#39;t already, read the <a href="./how-to-write-test-cases-developers-will-love.md">“How to write test cases developers will love”</a></p></blockquote>

<p>We approach the testing of a fairly large <code>nodejs</code> application from a real-world perspective, with refactoring in mind. A good way to think about large scale is to focus on smaller units and how they integrate with the rest of the application.</p>

<p>The philosophy of test-driven development is to write failing tests, followed by code that resolves the failing use cases, then refactor, rinse, and repeat. In most real-world settings, writing tests may start at any given moment, depending on multiple variables, one of which is the pressure and timeline of the project at hand.</p>

<p>Writing some tests after the fact <em>(characterization tests)</em> is not a new concept. It is common when dealing with legacy code, or simply an ill-tested code base. That is the case we are dealing with in our code sample.</p>

<p>The first step is to read the code and identify areas of improvement before we start writing tests. The clear improvement opportunity is to eject the function <code>getter()</code> out of the router. Our new construct looks like the following: <code>route.get(&#39;/songs&#39;, getter);</code>, which allows us to test <code>getter()</code> in isolation.</p>

<p>Our test skeleton looks like the following:</p>

<pre><code class="language-JavaScript">describe(&#39;getter()&#39;, () =&gt; {
  let req, res, next, sessionObject;
  beforeEach(() =&gt; {
    next = sinon.spy();
    sessionObject = { ... };//mocking session object
    req = { params: {id: 1234}, user: sessionObject };
    //parentheses make the arrow return the object literal
    res = { status: (code) =&gt; ({ json: sinon.spy() }) };
  });
  //...
});
</code></pre>
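<p>The <code>res</code> double above mirrors the chained <code>res.status(code).json(payload)</code> call. Below is a dependency-free sanity check of that shape, with a hand-rolled spy standing in for <code>sinon.spy()</code> (the names are ours, for illustration only):</p>

```javascript
// Hand-rolled spy: records whether it was called and with which arguments.
function spy() {
  const fn = (...args) => { fn.called = true; fn.args = args; };
  fn.called = false;
  return fn;
}

const jsonSpy = spy();
// status() must RETURN the object carrying json(), or the chained call breaks
const res = { status: (code) => ({ json: jsonSpy }) };

// what a handler under test would typically do:
res.status(200).json({ id: 1234 });
```

<p>Note the parentheses around the object literal in <code>status</code>; without them the arrow function body is parsed as a block and returns <code>undefined</code>, breaking the chain.</p>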

<p>Let&#39;s examine the case where the stream is actually going to fail.</p>

<blockquote><p>Note that we lack a way to get a handle on the stream object, as the handler does not return any object to tap into. Luckily, the response and request objects are both instances of streams, so good mocking can come to our rescue.</p></blockquote>

<pre><code class="language-JavaScript">//...
let eventEmitter = require(&#39;events&#39;).EventEmitter,
  httpMock = require(&#39;node-mocks-http&#39;);

//...
//a regular function keeps `this` bound to the mocha test context
it(&#39;fails when no songs are found&#39;, function(done) {
    var self = this; 
    this.next = sinon.spy();
    //method, url and body are assumed defined in the enclosing scope
    this.req = httpMock.createRequest({method, url, body});
    this.res = httpMock.createResponse({eventEmitter: eventEmitter});
    
    getter(this.req, this.res, this.next);
    this.res.on(&#39;error&#39;, function(error){
        assert(self.next.called, &#39;next() has been called&#39;);
        done();//the error is expected here, so do not pass it to done()
    });
});
</code></pre>

<p>Mocking both request and response objects makes more sense in our context. Likewise, to mock the success case, the reader stream&#39;s <code>fs.createReadStream()</code> has to be stubbed so that it emits a stream of fake content; this time, <code>this.res.on(&#39;end&#39;)</code> will be used to make assertions.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Automated testing of streams is quite intimidating for newbies and veterans alike. There are enough use cases <em><a href="https://bit.ly/2ZFJytb">in the book</a></em> to get you past that mark.</p>

<p>In this article, we reviewed how testing tends to be more of an art than a science. We also stressed the fact that, as in any art, practice makes perfect ~ testing streams is particularly challenging, especially when a read/write is involved. There are additional complementary materials in the <strong>“Testing <code>nodejs</code> applications”</strong> book.</p>

<h2 id="references">References</h2>
<ul><li><a href="https://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></li></ul>

<p><a href="https://getsimple.works/tag:snippets" class="hashtag"><span>#</span><span class="p-category">snippets</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:streams" class="hashtag"><span>#</span><span class="p-category">streams</span></a> <a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:mocking" class="hashtag"><span>#</span><span class="p-category">mocking</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/testing-nodejs-streams</guid>
      <pubDate>Wed, 16 Jun 2021 23:46:04 +0000</pubDate>
    </item>
    <item>
      <title>How to install upstart</title>
      <link>https://getsimple.works/how-to-install-upstart?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This article revisits essentials on how to install upstart an event based daemon for starting/stopping tasks on development and production servers.&#xA;&#xA;  This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wide audience of software developers  to setup working environment.  Testing Nodejs Applications Book Cover&#xA;You can grab a copy of this book on this link&#xA;&#xA;There are a plethora of task execution solutions, for instance systemd and init, rather complex to work with. That makes upstart a good alternative to such tools. &#xA;&#xA;In this article you will learn about:&#xA;&#xA;-- &#xA;&#xA;Tools available for task execution &#xA;How to install upstart task execution&#xA;How to write basic upstart task  &#xA;&#xA;Installing upstart on Linux &#xA;&#xA;It is always a good idea to update the system before start working. There is no exception, even when a daily task updates automatically binaries. That can be achieved on Ubuntu and Aptitude enabled systems as following:&#xA;&#xA;$ apt-get update # Fetch list of available updates&#xA;$ apt-get upgrade # Upgrades current packages&#xA;$ apt-get dist-upgrade # Installs only new updates&#xA;Example: updating aptitude binaries&#xA;&#xA;At this point most of packages should be installed or upgraded. Except Packages whose PPA have been removed or not available in the registry. Installing software can be done by installing binaries, or using Ubuntu package manager.&#xA;&#xA;Installing a upstart on Linux using apt&#xA;&#xA;Installing upstart on macOS &#xA; &#xA;upstart is a utility designed mainly for Linux systems. However, macOS has its equivalent, launchctl designed to stop/stop processes prior/after the system restarts.   
&#xA;&#xA;Installing upstart on a Windows machine&#xA;&#xA;Whereas macOS  systems and Linux are quite relax when it comes to working with system processes, Windows is a beast on its own way. upstart was built for nix systems but  there is no equivalent on Windows systems: Service Control Manager. It basically has the same ability to check and restart processes that are failing. &#xA;&#xA;Automated upgrades &#xA;&#xA;Before we dive into automatic upgrades, we should consider nuances associated to managing a mongodb instance. The updates fall into two major, quite interesting, categories: patch updates and version upgrades. &#xA;&#xA;Following the SemVer ~ aka Semantic Versioning standard, it is recommended that the only pair minor versions be considered for version upgrades. This is because minor versions, as well as major versions, are subject to introducing breaking changes or incompatibility between two versions.  On the other hand, patches do not introduce breaking changes. Those can therefore be automated. &#xA;&#xA;In case of a critical infrastructure piece of processes state management calibre, we expect breaking changes when a new version introduces a configuration setting is added, or dropped between two successive versions. Upstart provides backward compatibility, so chances for breaking changes between two minor versions is really minimal.  &#xA;&#xA;  We should highlight that it is always better to upgrade at deployment time. The process is even easier in containerized context. We should also automate only patches, to avoid to miss security patches. &#xA;&#xA;In the context of Linux, we will use the unattended-upgrades package to do the work. &#xA;&#xA;$ apt-get install unattended-upgrades apticron&#xA;Example: install unattended-upgrades&#xA;&#xA;Two things to fine-tune to make this solution work are: to enable a blacklist of packages we do not to automatically update, and two, to enable particular packages we would love to update on a periodical basis. 
That is compiled in the following shell scripts.&#xA;&#xA;Unattended-Upgrade::Allowed-Origins {&#xA;//  &#34;${distroid}:${distrocodename}&#34;;&#xA;    &#34;${distroid}:${distrocodename}-security&#34;; # upgrading security patches only &#xA;//   &#34;${distroid}:${distrocodename}-updates&#34;;  &#xA;//  &#34;${distroid}:${distrocodename}-proposed&#34;;&#xA;//  &#34;${distroid}:${distrocodename}-backports&#34;;&#xA;};&#xA;&#xA;Unattended-Upgrade::Package-Blacklist {&#xA;    &#34;vim&#34;;&#xA;};&#xA;Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades&#xA;&#xA;The next step is necessary to make sure  unattended-upgrades download, install and cleanups tasks have a default period: once, twice a day or a week. &#xA;&#xA;APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day&#xA;APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day&#xA;APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week&#xA;APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day&#xA;Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades&#xA;&#xA;This approach works on Linux(Ubuntu), especially deployed in production, but not Windows nor macOS. The last issue, is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool apticron in first paragraph intervenes. To make it work, we will specify which email to send messages to, and that will be all. &#xA;&#xA;EMAIL=&#34;email@host.tld&#34;&#xA;Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf&#xA;&#xA;Conclusion&#xA;&#xA;In this article we revisited ways to install upstart on various platforms. 
Even though configuration was beyond the scope of this article*, we managed to get everyday quick refreshers out.&#xA;&#xA;References&#xA;&#xA; An A-Z Index of the Apple macOS command line (macOS bash) and the Apple macOS How-to guides and examples&#xA; Configuring nodejs applications&#xA;&#xA;#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications]]&gt;</description>
<content:encoded><![CDATA[<p>This article revisits essentials on how to install <code>upstart</code>, an event-based daemon for starting/stopping tasks on development and production servers.</p>

<blockquote><p>This article has complementary materials to the <strong><em><a href="http://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>. However, the article is designed to help both those who already bought the book and the wider audience of software developers to set up a working environment.  <a href="http://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing Nodejs Applications Book Cover"/></a>
<strong><em><a href="http://bit.ly/2ZFJytb">You can grab a copy of this book on this link</a></em></strong></p></blockquote>

<p>There is a plethora of task execution solutions, for instance <code>systemd</code> and <code>init</code>, that are rather complex to work with. That makes <code>upstart</code> a good alternative to such tools.</p>

<p><strong>In this article you will learn about:</strong></p>

<ul><li>Tools available for task execution</li>
<li>How to install <code>upstart</code> task execution</li>
<li>How to write basic <code>upstart</code> tasks</li></ul>

<h2 id="installing-upstart-on-linux">Installing <code>upstart</code> on Linux</h2>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and aptitude-enabled systems as follows:</p>

<pre><code class="language-shell">$ apt-get update # Fetch the list of available updates
$ apt-get upgrade # Upgrade currently installed packages
$ apt-get dist-upgrade # Upgrade, adding or removing dependencies as needed
</code></pre>

<p><em><em>Example</em>: updating aptitude binaries</em></p>

<p>At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is no longer available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.</p>

<h3 id="installing-a-upstart-on-linux-using-apt">Installing <code>upstart</code> on Linux using <code>apt</code></h3>
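<p>On releases that still ship <code>upstart</code> (Ubuntu prior to 15.04; later releases moved to <code>systemd</code>), the install is a one-liner, and a basic task is a small file under <code>/etc/init/</code>. The job below is a sketch; <code>myapp</code> and its paths are hypothetical, and we write to <code>/tmp</code> for illustration:</p>

```shell
# install upstart from the package registry (older Ubuntu releases only):
#   $ sudo apt-get install -y upstart

# a minimal job definition; on a real system this lives in /etc/init/myapp.conf
cat > /tmp/myapp.conf <<'EOF'
description "myapp nodejs server"
start on filesystem and static-network-up
stop on shutdown
respawn
exec /usr/bin/node /opt/myapp/server.js
EOF

# then: sudo initctl start myapp
```

<p><code>respawn</code> is what gives <code>upstart</code> its supervisor flavor: the task is restarted automatically if the process dies.</p>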

<h2 id="installing-upstart-on-macos">Installing <code>upstart</code> on macOS</h2>

<p><code>upstart</code> is a utility designed mainly for Linux systems. However, macOS has its equivalent, <code>launchctl</code>, designed to start/stop processes before/after the system restarts.</p>

<h2 id="installing-upstart-on-a-windows-machine">Installing <code>upstart</code> on a Windows machine</h2>

<p>Whereas macOS and Linux are quite relaxed when it comes to working with system processes, Windows is a beast in its own way. <code>upstart</code> was built for <code>*nix</code> systems, but there is an equivalent on Windows systems: the Service Control Manager. It basically has the same ability to check and restart processes that are failing.</p>

<h2 id="automated-upgrades">Automated upgrades</h2>

<p>Before we dive into automatic upgrades, we should consider nuances associated with managing an <code>upstart</code> installation. The updates fall into two major, quite interesting, categories: <strong><em>patch</em></strong> updates and <strong><em>version upgrades</em></strong>.</p>

<p>Following the <a href="https://semver.org/">SemVer ~ <em>aka Semantic Versioning</em></a> standard, it is not recommended to automate <strong><em>minor</em></strong> version upgrades. This is because minor versions, as well as major versions, are subject to introducing breaking changes or incompatibility between two versions. On the other hand, patches do not introduce breaking changes; those can therefore be automated.</p>

<p>For a critical infrastructure piece of the process state management calibre, we expect breaking changes when a configuration setting is added or dropped between two successive versions. Upstart provides backward compatibility, so the chance of breaking changes between two minor versions is really minimal.</p>

<blockquote><p>We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, to avoid missing security patches.</p></blockquote>

<p>In the context of Linux, we will use the <strong><em>unattended-upgrades</em></strong> package to do the work.</p>

<pre><code class="language-shell">$ apt-get install unattended-upgrades apticron
</code></pre>

<p><em><em>Example</em>: install unattended-upgrades</em></p>

<p>Two things to fine-tune to make this solution work: one, enable a blacklist of packages we do not want to automatically update, and two, enable the particular origins we would love to update on a periodical basis. That is compiled in the following configuration.</p>

<pre><code class="language-shell">Unattended-Upgrade::Allowed-Origins {
//  &#34;${distro_id}:${distro_codename}&#34;;
    &#34;${distro_id}:${distro_codename}-security&#34;; # upgrading security patches only 
//   &#34;${distro_id}:${distro_codename}-updates&#34;;  
//  &#34;${distro_id}:${distro_codename}-proposed&#34;;
//  &#34;${distro_id}:${distro_codename}-backports&#34;;
};

Unattended-Upgrade::Package-Blacklist {
    &#34;vim&#34;;
};
</code></pre>

<p><em><em>Example</em>: fine-tune the blacklist and whitelist in <code>/etc/apt/apt.conf.d/50unattended-upgrades</code></em></p>

<p>The next step is necessary to make sure the <strong><em>unattended-upgrades</em></strong> download, install, and cleanup tasks have a default period: once or twice a day, or once a week.</p>

<pre><code class="language-shell">APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day
</code></pre>

<p><em><em>Example</em>: tuning the tasks parameter <code>/etc/apt/apt.conf.d/20auto-upgrades</code></em></p>

<p>This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool, <code>apticron</code>, from the first paragraph comes in. To make it work, we specify which email to send messages to, and that is all.</p>

<pre><code class="language-shell">EMAIL=&#34;&lt;email&gt;@&lt;host.tld&gt;&#34;
</code></pre>

<p><em><em>Example</em>: tuning reporting tasks email parameter <code>/etc/apticron/apticron.conf</code></em></p>

<h2 id="conclusion">Conclusion</h2>

<p>In this article we revisited ways to install <code>upstart</code> on various platforms. Even though <strong><em><a href="https://getsimple.works/how-to-configure-nodejs-applications#configure-upstart-to-start-nodejs-application">configuration was beyond the scope of this article</a></em></strong>, we managed to squeeze in some everyday quick refreshers.</p>

<h2 id="references">References</h2>
<ul><li><em><a href="https://ss64.com/osx/">An A-Z Index of the Apple macOS command line (macOS bash)</a></em> and the <em><a href="https://ss64.com/osx/syntax.html">Apple macOS How-to guides and examples</a></em></li>
<li><a href="https://getsimple.works/how-to-configure-nodejs-applications">Configuring <code>nodejs</code> applications</a></li></ul>

<p><a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://getsimple.works/tag:UnattendedUpgrades" class="hashtag"><span>#</span><span class="p-category">UnattendedUpgrades</span></a> <a href="https://getsimple.works/tag:nginx" class="hashtag"><span>#</span><span class="p-category">nginx</span></a> <a href="https://getsimple.works/tag:y2020" class="hashtag"><span>#</span><span class="p-category">y2020</span></a> <a href="https://getsimple.works/tag:Jan2020" class="hashtag"><span>#</span><span class="p-category">Jan2020</span></a> <a href="https://getsimple.works/tag:HowTo" class="hashtag"><span>#</span><span class="p-category">HowTo</span></a> <a href="https://getsimple.works/tag:ConfiguringNodejsApplications" class="hashtag"><span>#</span><span class="p-category">ConfiguringNodejsApplications</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:TestingNodejsApplications" class="hashtag"><span>#</span><span class="p-category">TestingNodejsApplications</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-install-upstart</guid>
      <pubDate>Fri, 31 Jan 2020 22:51:29 +0000</pubDate>
    </item>
    <item>
      <title>How to install monit</title>
      <link>https://getsimple.works/how-to-install-monit?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This article revisits essentials on how to install monit monitoring system on production servers.&#xA;&#xA;  This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wide audience of software developers  to setup working environment.  Testing Nodejs Applications Book Cover&#xA;You can grab a copy of this book on this link&#xA;&#xA;There are a plethora of monitoring and logging solutions around the internet. This article will not focus on any of those, rather provide alternatives using tools already available in Linux/UNIX environments, that may achieve near same capabilities as any of those solutions. &#xA;&#xA;In this article you will learn about: &#xA;&#xA;--&#xA;&#xA;Difference between logging and monitoring&#xA;Tools available for logging &#xA;Tools available for monitoring &#xA;How to install monitoring and logging tools &#xA;How to connect end-to-end reporting for faster response times. &#xA;&#xA;Installing monit on Linux &#xA;&#xA;It is always a good idea to update the system before start working. There is no exception, even when a daily task updates automatically binaries. That can be achieved on Ubuntu and Aptitude enabled systems as following:&#xA;&#xA;$ apt-get update # Fetch list of available updates&#xA;$ apt-get upgrade # Upgrades current packages&#xA;$ apt-get dist-upgrade # Installs only new updates&#xA;Example: updating aptitude binaries&#xA;&#xA;At this point most of packages should be installed or upgraded. Except Packages whose PPA have been removed or not available in the registry. Installing software can be done by installing binaries, or using Ubuntu package manager.&#xA;&#xA;Installing a monit on Linux using apt&#xA;Installing monit on macOS &#xA;&#xA;In case homebrew is not already available on your mac, this is how to get one up and running. 
On its own, homebrew depends on ruby runtime to be available. &#xA;&#xA;  homebrew is a package manager and software installation tool that makes most developer tools installation a breeze. &#xA;&#xA;$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;&#xA;Example: installation instruction as provided by brew.sh&#xA;&#xA;Generally speaking, this is how to install/uninstall things with brew &#xA;&#xA;$ brew install wget &#xA;$ brew uninstall wget &#xA;Example: installing/uninstalling wget binaries using homebrew&#xA;&#xA;  We have to to stress on the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.&#xA;&#xA;It is always a good idea to update the system before start working. And that, even when we have a daily task that automatically updates the system for us. macOS  can use homebrew package manager on maintenance matters. To update/upgrade or check outdated packages, following commands would help. 
&#xA;&#xA;$ brew outdated                   # lists all outdated packages&#xA;$ brew cleanup -n                 # visualize the list of things are going to be cleaned up.&#xA;&#xA;$ brew upgrade                    # Upgrades all things on the system&#xA;$ brew update                     # Updates all outdated + brew itself&#xA;$ brew update formula           # Updates one formula&#xA;&#xA;$ brew install formula@version    # Installs formula at a particular version.&#xA;$ brew tap formular@version/brew  # Installs formular from third party repository&#xA;&#xA;untap/re-tap a repo when previous installation failed&#xA;$ brew untap formular &amp;&amp; brew tap formula   &#xA;$ brew services start formular@version&#xA;Example: key commands to work with homebrew cli&#xA;&#xA;  For more informations, visit: Homebrew ~ FAQ.&#xA;&#xA;Installing a monit on a macOS  using homebrew&#xA;&#xA;It is hard to deny the supremacy of monit on NIX systems, and that doesn&#39;t exclude macOS systems. Installation of monit on macOS using homebrew aligns with homebrew installation guidelines. From above templates, the next example displays how easy it is to have monit up and running. &#xA;&#xA;$ brew install monit        # Installation of latest monit&#xA;$ brew services start monit # Starting latest monit as a service &#xA;Example: installing monit using homebrew&#xA;&#xA;Installing monit on a Windows machine&#xA;&#xA;Whereas macOS  systems and Linux are quite relax when it comes to interacting with processes, Windows is a beast on its own way. monit was built for nix systems but  there is no equivalent on Windows systems: Service Control Manager. It basically has the same ability to check and restart processes that are failing. &#xA;&#xA;Automated upgrades &#xA;&#xA;Following the SemVer ~ aka Semantic Versioning standard, it is not recommended to consider minor/major versions for automated upgrades. 
One of the reasons being that these versions are subject to introducing breaking changes or incompatibility between two versions.  On the other hand, patches are less susceptible to introduce breaking changes, whence ideal candidates for automated upgrades. Another among other reasons, being that security fixes are released as patches to a minor version.  &#xA;&#xA;In case of a critical infrastructure piece that is monitoring, we expect breaking changes when a new version introduces a configuration setting is added, or dropped between two successive versions. Monit is a well thought software that provides backward compatibility, so chances for breaking changes between two minor versions is really minimal.  &#xA;&#xA;  We should highlight that it is always better to upgrade at deployment time. The process is even easier in containerized context. We should also automate only patches, to avoid to miss security patches. &#xA;&#xA;In the context of Linux, we will use the unattended-upgrades package to do the work. &#xA;&#xA;$ apt-get install unattended-upgrades apticron&#xA;Example: install unattended-upgrades&#xA;&#xA;Two things to fine-tune to make this solution work are: to enable a blacklist of packages we do not to automatically update, and two, to enable particular packages we would love to update on a periodical basis. 
That is compiled in the following shell scripts.&#xA;&#xA;Unattended-Upgrade::Allowed-Origins {&#xA;//  &#34;${distroid}:${distrocodename}&#34;;&#xA;    &#34;${distroid}:${distrocodename}-security&#34;; # upgrading security patches only &#xA;//   &#34;${distroid}:${distrocodename}-updates&#34;;  &#xA;//  &#34;${distroid}:${distrocodename}-proposed&#34;;&#xA;//  &#34;${distroid}:${distrocodename}-backports&#34;;&#xA;};&#xA;&#xA;Unattended-Upgrade::Package-Blacklist {&#xA;    &#34;vim&#34;;&#xA;};&#xA;Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades&#xA;&#xA;The next step is necessary to make sure  unattended-upgrades download, install and cleanups tasks have a default period: once, twice a day or a week. &#xA;&#xA;APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day&#xA;APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day&#xA;APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week&#xA;APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day&#xA;Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades&#xA;&#xA;This approach works on Linux(Ubuntu), especially deployed in production, but not Windows nor macOS. The last issue, is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool apticron in first paragraph intervenes. To make it work, we will specify which email to send messages to, and that will be all. &#xA;&#xA;EMAIL=&#34;email@host.tld&#34;&#xA;Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf&#xA;&#xA;Conclusion&#xA;&#xA;In this article we revisited ways to install monit on various platforms. 
Even though configuration was beyond the scope of this article, we managed to get everyday quick refreshers out.&#xA;&#xA;Reading list and References&#xA;&#xA;upstart tutorial&#xA;Uptime&#xA;An A-Z Index of the Apple macOS command line (macOS bash) and the Apple macOS How-to guides and examples&#xA; Configuring nodejs applications&#xA;&#xA;#nodejs #homebrew #UnattendedUpgrades #monit #y2020 ,#Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications]]&gt;</description>
      <content:encoded><![CDATA[<p>This article revisits essentials on how to install <code>monit</code> monitoring system on production servers.</p>

<blockquote><p>This article has complementary materials to the <strong><em><a href="http://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>. However, the article is designed to help both those who already bought the book and the wider audience of software developers to set up a working environment. <a href="http://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing Nodejs Applications Book Cover"/></a>
<strong><em><a href="http://bit.ly/2ZFJytb">You can grab a copy of this book on this link</a></em></strong></p></blockquote>

<p>There is a plethora of monitoring and logging solutions on the internet. This article does not focus on any of those; instead, it provides alternatives using tools already available in Linux/UNIX environments that can achieve nearly the same capabilities.</p>

<p><strong>In this article you will learn about:</strong></p>

<ul><li>Difference between logging and monitoring</li>
<li>Tools available for logging</li>
<li>Tools available for monitoring</li>
<li>How to install monitoring and logging tools</li>
<li>How to connect end-to-end reporting for faster response times.</li></ul>

<h2 id="installing-monit-on-linux">Installing <code>monit</code> on Linux</h2>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates binaries. On Ubuntu and other Aptitude-enabled systems, that can be achieved as follows:</p>

<pre><code class="language-shell">$ apt-get update       # Fetches the list of available updates
$ apt-get upgrade      # Upgrades currently installed packages
$ apt-get dist-upgrade # Upgrades packages, adding or removing dependencies as needed
</code></pre>

<p><em><em>Example</em>: updating aptitude binaries</em></p>

<p>At this point, most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries directly, or by using the Ubuntu package manager.</p>

<h3 id="installing-a-monit-on-linux-using-apt">Installing <code>monit</code> on Linux using <code>apt</code></h3>
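
<p>On Ubuntu, <code>monit</code> ships in the default repositories, so installation follows the <code>apt</code> template above; as a minimal sketch (service names assume a stock Ubuntu setup):</p>

<pre><code class="language-shell">$ sudo apt-get update        # refresh the package list first
$ sudo apt-get install monit # install monit from the default repositories
$ sudo service monit start   # start the monit daemon
</code></pre>

<p><em><em>Example</em>: installing <code>monit</code> using <code>apt</code></em></p>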

<h2 id="installing-monit-on-macos">Installing <code>monit</code> on macOS</h2>

<p>In case <code>homebrew</code> is not already available on your Mac, this is how to get it up and running. <code>homebrew</code> itself depends on the ruby runtime being available.</p>

<blockquote><p><code>homebrew</code> is a package manager and software installation tool that makes most developer tools installation a breeze.</p></blockquote>

<pre><code class="language-shell">$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;
</code></pre>

<p><em><em>Example:</em> installation instruction as provided by <a href="https://brew.sh/">brew.sh</a></em></p>

<p>Generally speaking, this is how to install/uninstall things with <code>brew</code></p>

<pre><code class="language-shell">$ brew install wget 
$ brew uninstall wget 
</code></pre>

<p><em><em>Example</em>: installing/uninstalling <code>wget</code> binaries using homebrew</em></p>

<blockquote><p>We have to stress the fact that <a href="https://brew.sh/">Homebrew</a> installs packages to their own directory and then symlinks their files into <code>/usr/local</code>.</p></blockquote>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates the system for us. On macOS, the <code>homebrew</code> package manager handles maintenance matters. To update, upgrade, or check outdated packages, the following commands help.</p>

<pre><code class="language-shell">$ brew outdated                   # Lists all outdated packages
$ brew cleanup -n                 # Previews the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the list of formulae
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade &lt;formula&gt;          # Upgrades one formula

$ brew install &lt;formula&gt;@&lt;version&gt;  # Installs &lt;formula&gt; at a particular version
$ brew tap &lt;user/repo&gt;              # Adds a third-party repository of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap &lt;user/repo&gt; &amp;&amp; brew tap &lt;user/repo&gt;
$ brew services start &lt;formula&gt;@&lt;version&gt;
</code></pre>

<p><em><em>Example</em>: key commands to work with <code>homebrew</code> cli</em></p>

<blockquote><p>For more information, visit: <a href="https://docs.brew.sh/FAQ">Homebrew ~ FAQ</a>.</p></blockquote>

<h3 id="installing-a-monit-on-a-macos-using-homebrew">Installing <code>monit</code> on macOS using <code>homebrew</code></h3>

<p>It is hard to deny the supremacy of <code>monit</code> on *NIX systems, and that doesn&#39;t exclude macOS. Installation of <code>monit</code> on macOS using <code>homebrew</code> aligns with the <code>homebrew</code> installation guidelines. Building on the templates above, the next example shows how easy it is to get <code>monit</code> up and running.</p>

<pre><code class="language-shell">$ brew install monit        # Installation of latest monit
$ brew services start monit # Starting latest monit as a service 
</code></pre>

<p><em><em>Example</em>: installing <code>monit</code> using <code>homebrew</code></em></p>
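
<p>Once started, it is worth confirming that the daemon is actually up. A quick check looks like the following; note that <code>monit status</code> requires the embedded HTTP interface to be enabled in the control file:</p>

<pre><code class="language-shell">$ brew services list   # monit should be listed as started
$ monit -t             # validates the syntax of the monit control file
$ monit status         # reports the state of monitored services
</code></pre>

<p><em><em>Example</em>: verifying the <code>monit</code> installation</em></p>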

<h2 id="installing-monit-on-a-windows-machine">Installing <code>monit</code> on a Windows machine</h2>

<p>Whereas macOS and Linux systems are quite relaxed when it comes to interacting with processes, Windows is a beast in its own right. <code>monit</code> was built for <code>*nix</code> systems, but Windows ships with an equivalent: the Service Control Manager. It has essentially the same ability to check and restart processes that are failing.</p>
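
<p>As a sketch of what the Service Control Manager offers, the <code>sc.exe</code> utility can configure automatic restarts for a failing service; <code>MyService</code> below is a hypothetical service name:</p>

<pre><code class="language-shell">&gt; sc failure MyService reset= 86400 actions= restart/60000/restart/60000/restart/60000
&gt; sc qfailure MyService
</code></pre>

<p><em><em>Example</em>: configuring the Service Control Manager to restart a failing service after 60 seconds, then querying the configured failure actions</em></p>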

<h2 id="automated-upgrades">Automated upgrades</h2>

<p>Following the <a href="https://semver.org/">SemVer ~ <em>aka Semantic Versioning</em></a> standard, it is not recommended to consider <strong><em>minor</em></strong>/<strong><em>major</em></strong> versions for automated upgrades. One reason is that these versions are liable to introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, are less likely to introduce breaking changes, hence ideal candidates for automated upgrades. Another reason is that security fixes are released as patches to a minor version.</p>

<p>In the case of a critical infrastructure piece such as monitoring, we expect breaking changes when a configuration setting is added or dropped between two successive versions. <code>monit</code> is well-thought-out software that provides backward compatibility, so the chances of breaking changes between two minor versions are really minimal.</p>

<blockquote><p>We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so as not to miss security patches.</p></blockquote>

<p>In the context of Linux, we will use the <strong><em>unattended-upgrades</em></strong> package to do the work.</p>

<pre><code class="language-shell">$ apt-get install unattended-upgrades apticron
</code></pre>

<p><em><em>Example</em>: install unattended-upgrades</em></p>

<p>Two things to fine-tune to make this solution work are: first, a blacklist of packages we do not want to update automatically; and second, the origins whose packages we do want updated on a periodic basis. That is captured in the following configuration file.</p>

<pre><code class="language-shell">Unattended-Upgrade::Allowed-Origins {
//  &#34;${distro_id}:${distro_codename}&#34;;
    &#34;${distro_id}:${distro_codename}-security&#34;; # upgrading security patches only 
//   &#34;${distro_id}:${distro_codename}-updates&#34;;  
//  &#34;${distro_id}:${distro_codename}-proposed&#34;;
//  &#34;${distro_id}:${distro_codename}-backports&#34;;
};

Unattended-Upgrade::Package-Blacklist {
    &#34;vim&#34;;
};
</code></pre>

<p><em><em>Example</em>: fine-tune the blacklist and whitelist in <code>/etc/apt/apt.conf.d/50unattended-upgrades</code></em></p>
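
<p>Before relying on this configuration, it can be exercised safely: the package ships a dry-run mode. Note that the binary name is singular, <code>unattended-upgrade</code>, even though the package name is plural.</p>

<pre><code class="language-shell">$ sudo unattended-upgrade --dry-run --debug
</code></pre>

<p><em><em>Example</em>: simulating an unattended upgrade run without installing anything</em></p>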

<p>The next step makes sure the <strong><em>unattended-upgrades</em></strong> download, install, and cleanup tasks have a default period: once or twice a day, or once a week.</p>

<pre><code class="language-shell">APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval &#34;7&#34;;               # cleans out unused packages once a week
APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day
</code></pre>

<p><em><em>Example</em>: tuning the task periods in <code>/etc/apt/apt.conf.d/20auto-upgrades</code></em></p>

<p>This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is reporting problems when an update fails, so that a human can intervene whenever possible. That is where the second tool, <code>apticron</code>, installed in the first paragraph comes in. To make it work, we specify which email address to send messages to, and that is all.</p>

<pre><code class="language-shell">EMAIL=&#34;&lt;email&gt;@&lt;host.tld&gt;&#34;
</code></pre>

<p><em><em>Example</em>: setting the reporting email address in <code>/etc/apticron/apticron.conf</code></em></p>

<h2 id="conclusion">Conclusion</h2>

<p>In this article we revisited ways to install <code>monit</code> on various platforms. Even though <strong><em><a href="https://getsimple.works/how-to-configure-nodejs-applications#configure-monit-to-monitor-nodejs-application">configuration was beyond the scope of this article</a></em></strong>, we managed to get quick refreshers for everyday use out of it.</p>

<h2 id="reading-list-and-references">Reading list and References</h2>
<ul><li><a href="http://howtonode.org/deploying-node-upstart-monit">upstart tutorial</a></li>
<li><a href="https://github.com/fzaninotto/uptime">Uptime</a></li>
<li><em><a href="https://ss64.com/osx/">An A-Z Index of the Apple macOS command line (macOS bash)</a></em> and the <em><a href="https://ss64.com/osx/syntax.html">Apple macOS How-to guides and examples</a></em></li>
<li><a href="https://getsimple.works/how-to-configure-nodejs-applications">Configuring <code>nodejs</code> applications</a></li></ul>

<p><a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://getsimple.works/tag:UnattendedUpgrades" class="hashtag"><span>#</span><span class="p-category">UnattendedUpgrades</span></a> <a href="https://getsimple.works/tag:monit" class="hashtag"><span>#</span><span class="p-category">monit</span></a> <a href="https://getsimple.works/tag:y2020" class="hashtag"><span>#</span><span class="p-category">y2020</span></a> <a href="https://getsimple.works/tag:Jan2020" class="hashtag"><span>#</span><span class="p-category">Jan2020</span></a> <a href="https://getsimple.works/tag:HowTo" class="hashtag"><span>#</span><span class="p-category">HowTo</span></a> <a href="https://getsimple.works/tag:ConfiguringNodejsApplications" class="hashtag"><span>#</span><span class="p-category">ConfiguringNodejsApplications</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:TestingNodejsApplications" class="hashtag"><span>#</span><span class="p-category">TestingNodejsApplications</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-install-monit</guid>
      <pubDate>Fri, 31 Jan 2020 22:49:49 +0000</pubDate>
    </item>
    <item>
      <title>How to install nginx</title>
      <link>https://getsimple.works/how-to-install-nginx?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This article revisits essentials on how to install nginx non blocking single threaded multipurpose web server on development and production servers.&#xA;&#xA;  This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wide audience of software developers  to setup working environment.  Testing Nodejs Applications Book Cover&#xA;You can grab a copy of this book on this link  &#xA;&#xA;Installing nginx on Linux &#xA;&#xA;It is always a good idea to update the system before start working. There is no exception, even when a daily task updates automatically binaries. That can be achieved on Ubuntu and Aptitude enabled systems as following:&#xA;&#xA;$ apt-get update # Fetch list of available updates&#xA;$ apt-get upgrade # Upgrades current packages&#xA;$ apt-get dist-upgrade # Installs only new updates&#xA;Example: updating aptitude binaries&#xA;&#xA;At this point most of packages should be installed or upgraded. Except Packages whose PPA have been removed or not available in the registry. Installing software can be done by installing binaries, or using Ubuntu package manager.&#xA;&#xA;Installing a nginx on Linux using apt&#xA;&#xA;Updating/Upgrading or first install of &#xA;$ sudo add-apt-repository ppa:nginx/stable&#xA;$ sudo apt-get update &#xA;$ sudo apt-get install - nginx &#xA;&#xA;To restart the service:&#xA;$ sudo service nginx restart &#xA;Example: updating PPA and installing nginx binaries&#xA;&#xA;  Adding nginx PPA in first step is only required for first installs, on a system that does not have the PPA available in the system database.  &#xA;&#xA;Installing nginx on macOS &#xA;&#xA;In case homebrew is not already available on your mac, this is how to get one up and running. On its own, homebrew depends on ruby runtime to be available. 
&#xA;&#xA;  homebrew is a package manager and software installation tool that makes most developer tools installation a breeze. &#xA;&#xA;$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;&#xA;Example: installation instruction as provided by brew.sh&#xA;&#xA;Generally speaking, this is how to install/uninstall things with brew &#xA;&#xA;$ brew install wget &#xA;$ brew uninstall wget &#xA;Example: installing/uninstalling wget binaries using homebrew&#xA;&#xA;  We have to to stress on the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.&#xA;&#xA;It is always a good idea to update the system before start working. And that, even when we have a daily task that automatically updates the system for us. macOS  can use homebrew package manager on maintenance matters. To update/upgrade or check outdated packages, following commands would help. &#xA;&#xA;$ brew outdated                   # lists all outdated packages&#xA;$ brew cleanup -n                 # visualize the list of things are going to be cleaned up.&#xA;&#xA;$ brew upgrade                    # Upgrades all things on the system&#xA;$ brew update                     # Updates all outdated + brew itself&#xA;$ brew update formula           # Updates one formula&#xA;&#xA;$ brew install formula@version    # Installs formula at a particular version.&#xA;$ brew tap formular@version/brew  # Installs formular from third party repository&#xA;&#xA;untap/re-tap a repo when previous installation failed&#xA;$ brew untap formular &amp;&amp; brew tap formula   &#xA;$ brew services start formular@version&#xA;Example: key commands to work with homebrew cli&#xA;&#xA;  For more informations, visit: Homebrew ~ FAQ.&#xA;&#xA;Installing a nginx on a Mac using homebrew&#xA;&#xA;$ brew install nginx@1.17.8  # as in formula@version&#xA;Example: installing nginx using homebrew&#xA;&#xA;Installing nginx on a Windows 
machine&#xA;&#xA;MacOs comes with Python and Ruby already enabled, these two languages are somehow required to run successfully a nodejs environment on a Mac. This is an easy target as nginx gives windows binaries that we can download and install on a couple of clicks.&#xA;&#xA;Automated upgrades &#xA;&#xA;Before we dive into automatic upgrades, we should consider nuances associated to managing an nginx deployment. The updates fall into two major, quite interesting, categories: patch updates and version upgrades. &#xA;&#xA;Following the SemVer ~ aka Semantic Versioning standard, it is not recommended to consider minor/major versions for automated upgrades. One of the reasons being that these versions are subject to introducing breaking changes or incompatibility between two versions.  On the other hand, patches are less susceptible to introduce breaking changes, whence ideal candidates for automated upgrades. Another among other reasons, being that security fixes are released as patches to a minor version.  &#xA;&#xA;In case of a WebServer, breaking changes may be introduced when a critical configuration setting is added, or dropped between two successive versions. &#xA;&#xA;  We should highlight that it is always better to upgrade at deployment time. The process is even easier in containerized context. We should also automate only patches, to avoid to miss security patches. &#xA;&#xA;In the context of Linux, we will use the unattended-upgrades package to do the work. &#xA;&#xA;$ apt-get install unattended-upgrades apticron&#xA;Example: install unattended-upgrades&#xA;&#xA;Two things to fine-tune to make this solution work are: to enable a blacklist of packages we do not to automatically update, and two, to enable particular packages we would love to update on a periodical basis. 
That is compiled in the following shell scripts.&#xA;&#xA;Unattended-Upgrade::Allowed-Origins {&#xA;//  &#34;${distroid}:${distrocodename}&#34;;&#xA;    &#34;${distroid}:${distrocodename}-security&#34;; # upgrading security patches only &#xA;//   &#34;${distroid}:${distrocodename}-updates&#34;;  &#xA;//  &#34;${distroid}:${distrocodename}-proposed&#34;;&#xA;//  &#34;${distroid}:${distrocodename}-backports&#34;;&#xA;};&#xA;&#xA;Unattended-Upgrade::Package-Blacklist {&#xA;    &#34;vim&#34;;&#xA;};&#xA;Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades&#xA;&#xA;The next step is necessary to make sure  unattended-upgrades download, install and cleanups tasks have a default period: once, twice a day or a week. &#xA;&#xA;APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day&#xA;APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day&#xA;APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week&#xA;APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day&#xA;Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades&#xA;&#xA;This approach works on Linux(Ubuntu), especially deployed in production, but not Windows nor macOS. The last issue, is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool apticron in first paragraph intervenes. To make it work, we will specify which email to send messages to, and that will be all. &#xA;&#xA;EMAIL=&#34;email@host.tld&#34;&#xA;Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf&#xA; &#xA;Conclusion&#xA;&#xA;In this article we revisited ways to install nginx on various platforms. Even though configuration was beyond the scope of this article, we managed to get everyday quick refreshers out. 
&#xA;&#xA;Reading list &#xA;&#xA;Configuring nodejs applications&#xA;&#xA;#nodejs #homebrew #UnattendedUpgrades #nginx #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications]]&gt;</description>
<content:encoded><![CDATA[<p>This article revisits essentials on how to install <code>nginx</code>, the non-blocking, single-threaded, multipurpose web server, on development and production servers.</p>

<blockquote><p>This article has complementary materials to the <strong><em><a href="http://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>. However, the article is designed to help both those who already bought the book and the wider audience of software developers to set up a working environment. <a href="http://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing Nodejs Applications Book Cover"/></a>
<strong><em><a href="http://bit.ly/2ZFJytb">You can grab a copy of this book on this link</a></em></strong></p></blockquote>

<h2 id="installing-nginx-on-linux">Installing <code>nginx</code> on Linux</h2>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates binaries. On Ubuntu and other Aptitude-enabled systems, that can be achieved as follows:</p>

<pre><code class="language-shell">$ apt-get update       # Fetches the list of available updates
$ apt-get upgrade      # Upgrades currently installed packages
$ apt-get dist-upgrade # Upgrades packages, adding or removing dependencies as needed
</code></pre>

<p><em><em>Example</em>: updating aptitude binaries</em></p>

<p>At this point, most packages should be installed or upgraded, except packages whose PPA has been removed or is not available in the registry. Installing software can be done by installing binaries directly, or by using the Ubuntu package manager.</p>

<h3 id="installing-a-nginx-on-linux-using-apt">Installing <code>nginx</code> on Linux using <code>apt</code></h3>

<p>Updating, upgrading, or a first install of the <code>nginx</code> server can be achieved with the following commands.</p>

<pre><code class="language-shell">$ sudo add-apt-repository ppa:nginx/stable
$ sudo apt-get update 
$ sudo apt-get install nginx 

# To restart the service:
$ sudo service nginx restart 
</code></pre>

<p><em><em>Example</em>: updating PPA and installing <code>nginx</code> binaries</em></p>

<blockquote><p>Adding the <code>nginx</code> PPA in the first step is only required for first installs, on a system that does not yet have the PPA in its database.</p></blockquote>
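
<p>After installation, two quick commands confirm that the binary and its configuration are sound:</p>

<pre><code class="language-shell">$ nginx -v        # prints the installed nginx version
$ sudo nginx -t   # tests the configuration files for syntax errors
</code></pre>

<p><em><em>Example</em>: verifying the <code>nginx</code> installation</em></p>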

<h2 id="installing-nginx-on-macos">Installing <code>nginx</code> on macOS</h2>

<p>In case <code>homebrew</code> is not already available on your Mac, this is how to get it up and running. <code>homebrew</code> itself depends on the ruby runtime being available.</p>

<blockquote><p><code>homebrew</code> is a package manager and software installation tool that makes most developer tools installation a breeze.</p></blockquote>

<pre><code class="language-shell">$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;
</code></pre>

<p><em><em>Example:</em> installation instruction as provided by <a href="https://brew.sh/">brew.sh</a></em></p>

<p>Generally speaking, this is how to install/uninstall things with <code>brew</code></p>

<pre><code class="language-shell">$ brew install wget 
$ brew uninstall wget 
</code></pre>

<p><em><em>Example</em>: installing/uninstalling <code>wget</code> binaries using <code>homebrew</code></em></p>

<blockquote><p>We have to stress the fact that <a href="https://brew.sh/">Homebrew</a> installs packages to their own directory and then symlinks their files into <code>/usr/local</code>.</p></blockquote>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates the system for us. On macOS, the <code>homebrew</code> package manager handles maintenance matters. To update, upgrade, or check outdated packages, the following commands help.</p>

<pre><code class="language-shell">$ brew outdated                   # Lists all outdated packages
$ brew cleanup -n                 # Previews the list of things that are going to be cleaned up

$ brew update                     # Updates brew itself and the list of formulae
$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew upgrade &lt;formula&gt;          # Upgrades one formula

$ brew install &lt;formula&gt;@&lt;version&gt;  # Installs &lt;formula&gt; at a particular version
$ brew tap &lt;user/repo&gt;              # Adds a third-party repository of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap &lt;user/repo&gt; &amp;&amp; brew tap &lt;user/repo&gt;
$ brew services start &lt;formula&gt;@&lt;version&gt;
</code></pre>

<p><em><em>Example</em>: key commands to work with <code>homebrew</code> cli</em></p>

<blockquote><p>For more information, visit: <a href="https://docs.brew.sh/FAQ">Homebrew ~ FAQ</a>.</p></blockquote>

<h3 id="installing-a-nginx-on-a-mac-using-homebrew">Installing <code>nginx</code> on a Mac using <code>homebrew</code></h3>

<pre><code class="language-shell">$ brew install nginx@1.17.8  # as in &lt;formula&gt;@&lt;version&gt;
</code></pre>

<p><em><em>Example</em>: installing <code>nginx</code> using <code>homebrew</code></em></p>

<h2 id="installing-nginx-on-a-windows-machine">Installing <code>nginx</code> on a Windows machine</h2>

<p>Windows is an easy target: <a href="https://nginx.org/en/docs/windows.html"><code>nginx</code></a> provides Windows binaries that we can download and install in a couple of clicks.</p>
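
<p>As described in the <code>nginx</code> for Windows documentation, the server is controlled from a console in the directory where the binaries were unpacked; <code>c:\nginx</code> below is a hypothetical install location:</p>

<pre><code class="language-shell">&gt; cd c:\nginx
&gt; start nginx
&gt; nginx -s reload
&gt; nginx -s quit
</code></pre>

<p><em><em>Example</em>: starting, reloading, and gracefully stopping <code>nginx</code> on Windows</em></p>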

<h2 id="automated-upgrades">Automated upgrades</h2>

<p>Before we dive into automatic upgrades, we should consider the nuances associated with managing an <code>nginx</code> deployment. Updates fall into two major, quite interesting, categories: <strong><em>patch</em></strong> updates and <strong><em>version upgrades</em></strong>.</p>

<p>Following the <a href="https://semver.org/">SemVer ~ <em>aka Semantic Versioning</em></a> standard, it is not recommended to consider <strong><em>minor</em></strong>/<strong><em>major</em></strong> versions for automated upgrades. One reason is that these versions are liable to introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, are less likely to introduce breaking changes, hence ideal candidates for automated upgrades. Another reason is that security fixes are released as patches to a minor version.</p>

<p>In the case of a web server, breaking changes may be introduced when a critical configuration setting is added or dropped between two successive versions.</p>

<blockquote><p>We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so as not to miss security patches.</p></blockquote>

<p>In the context of Linux, we will use the <strong><em>unattended-upgrades</em></strong> package to do the work.</p>

<pre><code class="language-shell">$ apt-get install unattended-upgrades apticron
</code></pre>

<p><em><em>Example</em>: install unattended-upgrades</em></p>

<p>Two things to fine-tune to make this solution work are: first, a blacklist of packages we do not want to update automatically; and second, the origins whose packages we do want updated on a periodic basis. That is captured in the following configuration file.</p>

<pre><code class="language-shell">Unattended-Upgrade::Allowed-Origins {
//  &#34;${distro_id}:${distro_codename}&#34;;
    &#34;${distro_id}:${distro_codename}-security&#34;; # upgrading security patches only 
//   &#34;${distro_id}:${distro_codename}-updates&#34;;  
//  &#34;${distro_id}:${distro_codename}-proposed&#34;;
//  &#34;${distro_id}:${distro_codename}-backports&#34;;
};

Unattended-Upgrade::Package-Blacklist {
    &#34;vim&#34;;
};
</code></pre>

<p><em><em>Example</em>: fine-tune the blacklist and whitelist in <code>/etc/apt/apt.conf.d/50unattended-upgrades</code></em></p>

<p>The next step makes sure the <strong><em>unattended-upgrades</em></strong> download, install, and cleanup tasks have a default period: once or twice a day, or once a week.</p>

<pre><code class="language-shell">APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval &#34;7&#34;;               # cleans out unused packages once a week
APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day
</code></pre>

<p><em><em>Example</em>: tuning the task periods in <code>/etc/apt/apt.conf.d/20auto-upgrades</code></em></p>

<p>This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is reporting problems when an update fails, so that a human can intervene whenever possible. That is where the second tool, <code>apticron</code>, installed in the first paragraph comes in. To make it work, we specify which email address to send messages to, and that is all.</p>

<pre><code class="language-shell">EMAIL=&#34;&lt;email&gt;@&lt;host.tld&gt;&#34;
</code></pre>

<p><em><em>Example</em>: tuning reporting tasks email parameter <code>/etc/apticron/apticron.conf</code></em></p>

<h2 id="conclusion">Conclusion</h2>

<p>In this article we revisited ways to install <code>nginx</code> on various platforms. Even though <strong><em><a href="https://getsimple.works/how-to-configure-nodejs-applications#configure-nginx-to-run-with-a-nodejs-server">configuration was beyond the scope of this article</a></em></strong>, we managed to get some everyday quick refreshers out of it.</p>

<h2 id="reading-list">Reading list</h2>
<ul><li><a href="https://getsimple.works/how-to-configure-nodejs-applications">Configuring <code>nodejs</code> applications</a></li></ul>

<p><a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://getsimple.works/tag:UnattendedUpgrades" class="hashtag"><span>#</span><span class="p-category">UnattendedUpgrades</span></a> <a href="https://getsimple.works/tag:nginx" class="hashtag"><span>#</span><span class="p-category">nginx</span></a> <a href="https://getsimple.works/tag:y2020" class="hashtag"><span>#</span><span class="p-category">y2020</span></a> <a href="https://getsimple.works/tag:Jan2020" class="hashtag"><span>#</span><span class="p-category">Jan2020</span></a> <a href="https://getsimple.works/tag:HowTo" class="hashtag"><span>#</span><span class="p-category">HowTo</span></a> <a href="https://getsimple.works/tag:ConfiguringNodejsApplications" class="hashtag"><span>#</span><span class="p-category">ConfiguringNodejsApplications</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:TestingNodejsApplications" class="hashtag"><span>#</span><span class="p-category">TestingNodejsApplications</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-install-nginx</guid>
      <pubDate>Fri, 31 Jan 2020 22:49:00 +0000</pubDate>
    </item>
    <item>
      <title>How to install mongodb</title>
      <link>https://getsimple.works/how-to-install-mongodb?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This article revisits essentials on how to install mongodb, one of leading noSQL databases on development and production servers.&#xA;&#xA;  This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help those who bought the book to setup their working environment, as well as the wide audience of software developers who need same information.  &#xA;&#xA;In this article we will talk about:&#xA;&#xA;How to install mongodb on Linux, macOS and Windows.&#xA;How to stop/start automatically prior/after system restarts&#xA;&#xA;We will not talk about:&#xA;&#xA;How to configure mongodb for development and production, as that is subject of another article worth visiting. &#xA;How to manage mongodb in a production environment, either in a containerized or standalone contexts.&#xA;How to load mongodb on Docker and Kubernetes&#xA;&#xA;Installing mongodb on Linux &#xA;&#xA;It is always a good idea to update the system before start working. There is no exception, even when a daily task updates automatically binaries. That can be achieved on Ubuntu and Aptitude enabled systems as following:&#xA;&#xA;$ apt-get update        # Fetch list of available updates&#xA;$ apt-get upgrade       # Upgrades current packages&#xA;$ apt-get dist-upgrade  # Installs only new updates&#xA;Example: updating aptitude binaries&#xA;&#xA;At this point most of packages should be installed or upgraded. Except Packages whose PPA have been removed or not available in the registry. 
Installing software can be done by installing binaries, or using Ubuntu package manager.&#xA;&#xA;Installing a mongodb on Linux using apt&#xA;&#xA;Updating/Upgrading or first time fresh install of &#xA;  sudo may be skipped if the current user has permission to write and execute programs &#xA;&#xA;Add public key used by the aptitude for further updates&#xA;gnupg should be available in the system&#xA;$ apt-get install gnupg &#xA;$ wget -qO - https://www.mongodb.org/static/pgp/server-3.6.asc | sudo apt-key add - &#xA;&#xA;create and add list for mongodb (version 3.6, but variations can differ from version to version, the same applies to architecture)&#xA;$ echo &#34;deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.6 multiverse&#34; | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list&#xA;&#xA;Updating libraries and make the actual install &#xA;$ sudo apt-get update&#xA;$ sudo apt-get install -y mongodb-org&#xA;&#xA;To install specific version(3.6.17 in our example) of mongodb, the following command helps&#xA;$ sudo apt-get install -y mongodb-org=3.6.17 mongodb-org-server=3.6.17 mongodb-org-shell=3.6.17 mongodb-org-mongos=3.6.17 mongodb-org-tools=3.6.17&#xA;Example: adding mongodb PPA binaries and installing a particular binary version&#xA;&#xA;It is always a good idea to upgrade often. Breaking changes happen on major/minor binary updates, but less likely on patch upgrades. The versions goes by pair numbers, so 3.2, 3.4, 3.6 etc. The transition that skips two version may be catastrophic. For example  upgrades from any 3.x to 3.6, for this to work, there should be upgraded to an intermediate update from 3.x to 3.4, after which the update from 3.4 to 3.6 becomes possible. 
&#xA;&#xA;Part 1&#xA;$ apt-cache policy mongodb-org          # Checking installed MongoDB version &#xA;$ apt-get install -y mongodb-org=3.4    # Installing 3.4 MongoDB version &#xA;&#xA;Part 2   &#xA;Running mongodb&#xA;$ sudo killall mongod &amp;&amp; sleep 3 &amp;&amp; sudo service mongod start&#xA;$ sudo service mongodb start           &#xA;&#xA;Part 3 &#xA;$ mongo                                 # Accessing to mongo CLI&#xA;&#xA;Compatible Mode&#xA;$   db.adminCommand( { setFeatureCompatibilityVersion: &#34;3.4&#34; } )  &#xA;$   exit&#xA;&#xA;Part 3 &#xA;$ sudo apt-get install -y mongodb-org=3.6   # Upgrading to latest 3.6 version &#xA;Restart Server + As in Part 2.&#xA;Example: updating mongodb binaries and upgrading to a version&#xA;&#xA;Installing mongodb on macOS &#xA;&#xA;In case homebrew is not already available on your mac, this is how to get one up and running. On its own, homebrew depends on ruby runtime to be available. &#xA;&#xA;  homebrew is a package manager and software installation tool that makes most developer tools installation a breeze. We should also highlight that homebrew requires xcode to be available on the system. &#xA;&#xA;$ /usr/bin/ruby -e \&#xA;    &#34;$(curl -fsSL https://raw.githubusercontent.com \&#xA;    /Homebrew/install/master/install)&#34;&#xA;Example: installation instruction as provided by brew.sh&#xA;&#xA;Generally speaking, this is how to install and uninstall things with brew &#xA;&#xA;$ brew install wget &#xA;$ brew uninstall wget &#xA;Example: installing/uninstalling wget binaries using homebrew&#xA;&#xA;  We have to to stress on the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.&#xA;&#xA;It is always a good idea to update the system before start working. And that, even when we have a daily task that automatically updates the system for us. macOS  can use homebrew package manager on maintenance matters. 
To update/upgrade or check outdated packages, following commands would help. &#xA;&#xA;$ brew outdated                   # lists all outdated packages&#xA;$ brew cleanup -n                 # visualize the list of things are going to be cleaned up.&#xA;&#xA;$ brew upgrade                    # Upgrades all things on the system&#xA;$ brew update                     # Updates all outdated + brew itself&#xA;$ brew update formula           # Updates one formula&#xA;&#xA;$ brew install formula@version    # Installs formula at a particular version.&#xA;$ brew tap formular@version/brew  # Installs formular from third party repository&#xA;&#xA;untap/re-tap a repo when previous installation failed&#xA;$ brew untap formular &amp;&amp; brew tap formula   &#xA;$ brew services start formular@version&#xA;Example: key commands to work with homebrew cli&#xA;&#xA;  For more informations, visit: Homebrew ~ FAQ.&#xA;&#xA;Installing a mongodb on a Mac using homebrew&#xA;&#xA;$ brew tap mongodb/brew &#xA;$ brew install mongodb-community@3.6&#xA;$ brew services start mongodb-community@3.6 # start mongodb as a mac service &#xA;Example: Install and running mongodb as a macOS service&#xA;&#xA;  Caveats ~ We have extra steps to make in order to start/stop automatically when the system goes up/down. 
This step is vital when doing development on macOS , which does not necessarily needs Linux production bound task runners.&#xA;&#xA;To have launchd start mongodb at login:&#xA;$ ln -s /usr/local/opt/mongodb/.plist ~/Library/LaunchAgents/&#xA;&#xA;Then to load mongodb now:&#xA;$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist&#xA;&#xA;To unregister and stop the service, use the following command&#xA;$ launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist&#xA;&#xA;When not want/need launchctl this command works fine&#xA;$ mongod &#xA;Example: Stop/Start when the system stops/starts&#xA;&#xA;Installing mongodb on a Windows machine&#xA;&#xA;Whereas MacOs, and most Linux distributions, come with Python and Ruby already enabled, It takes extra mile for Windows to make those two languages available. We have to stress on the fact that those two languages are somehow required to deploy a mongodb environment on most platforms, especially when working with nodejs. &#xA;&#xA;The bright side of this story is that mongodb provides windows binaries that we can downloaded and installed in a couple of clicks.&#xA;&#xA;Automated upgrades &#xA;&#xA;Before we dive into automatic upgrades, we should consider nuances associated to managing a mongodb instance. The updates fall into two major, quite interesting, categories: patch updates and version upgrades. &#xA;&#xA;Following the SemVer ~ aka Semantic Versioning standard, it is not recommended to consider minor/major versions for automated upgrades. One of the reasons being that these versions are subject to introducing breaking changes or incompatibility between two versions.  On the other hand, patches are less susceptible to introduce breaking changes, whence ideal candidates for automated upgrades. Another among other reasons, being that security fixes are released as patches to a minor version.  &#xA;&#xA;  We should highlight that it is always better to upgrade at deployment time. 
The process is even easier in containerized context. We should also automate only patches, to avoid to miss security patches. &#xA;&#xA;In the context of Linux, we will use the unattended-upgrades package to do the work. &#xA;&#xA;$ apt-get install unattended-upgrades apticron&#xA;Example: install unattended-upgrades&#xA;&#xA;Two things to fine-tune to make this solution work are: to enable a blacklist of packages we do not to automatically update, and two, to enable particular packages we would love to update on a periodical basis. That is compiled in the following shell scripts.&#xA;&#xA;Unattended-Upgrade::Allowed-Origins {&#xA;//  &#34;${distroid}:${distrocodename}&#34;;&#xA;    &#34;${distroid}:${distrocodename}-security&#34;; # upgrading security patches only &#xA;//   &#34;${distroid}:${distrocodename}-updates&#34;;  &#xA;//  &#34;${distroid}:${distrocodename}-proposed&#34;;&#xA;//  &#34;${distroid}:${distrocodename}-backports&#34;;&#xA;};&#xA;&#xA;Unattended-Upgrade::Package-Blacklist {&#xA;    &#34;vim&#34;;&#xA;};&#xA;Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades&#xA;&#xA;The next step is necessary to make sure  unattended-upgrades download, install and cleanups tasks have a default period: once, twice a day or a week. &#xA;&#xA;APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day&#xA;APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day&#xA;APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week&#xA;APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day&#xA;Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades&#xA;&#xA;This approach works on Linux(Ubuntu), especially deployed in production, but not Windows nor macOS. 
The last issue, is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool apticron in first paragraph intervenes. To make it work, we will specify which email to send messages to, and that will be all. &#xA;&#xA;EMAIL=&#34;email@host.tld&#34;&#xA;Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf&#xA;&#xA;Conclusion&#xA;&#xA;In this article we revisited ways to install mongodb on various platforms. Even though configuration was beyond the scope of this article*, we managed to get everyday quick refreshers out in the article. There are areas we wish we added more coverage, probably not on the first version of this article. &#xA;&#xA;  Some of other places where people are wondering how to install mongodb on various platforms include, but not limited to Quora, StackOverflow and Reddit. &#xA;&#xA;Reading list &#xA;&#xA;Documentation on how to Install mongodb on Ubuntu&#xA; An A-Z Index of the Apple macOS command line (macOS bash) and the Apple macOS How-to guides and examples, launchd&#xA; The answer to two questions: 1) How to know package version and 2) How to install a specific Package with aptitude. &#xA;Performing Automated In-place Cluster Updates, on Google Cloud Platform: Auto-upgrading nodes&#xA;More on configuring unattended-upgrades are in these articles: Automatic Updates,  How to Enable Unattended Upgrades on Ubuntu/Debian and How to Setup Automatic Security Updates on Ubuntu 16.04&#xA;&#xA;#nodejs #homebrew #UnattendedUpgrades #mongodb #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications]]&gt;</description>
<content:encoded><![CDATA[<p>This article revisits essentials on how to install <code>mongodb</code>, one of the leading NoSQL databases, on development and production servers.</p>

<blockquote><p>This article has complementary materials to the <strong><em><a href="http://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>. However, the article is designed to help those who bought the book to set up their working environment, as well as the wide audience of software developers who need the same information.</p></blockquote>

<p><strong>In this article we will talk about</strong>:</p>

<ul><li>How to install <code>mongodb</code> on Linux, macOS and Windows.</li>
<li>How to stop/start automatically prior/after system restarts</li></ul>

<p><strong>We will not talk about</strong>:</p>

<ul><li>How to configure <code>mongodb</code> for development and production, as that is the subject of another article worth visiting.</li>
<li>How to manage <code>mongodb</code> in a production environment, in either containerized or standalone contexts.</li>
<li>How to load <code>mongodb</code> on Docker and Kubernetes</li></ul>

<h2 id="installing-mongodb-on-linux">Installing <code>mongodb</code> on Linux</h2>

<p>It is always a good idea to update the system before starting work. There is no exception, even when a daily task automatically updates binaries. That can be achieved on Ubuntu and other Aptitude-enabled systems as follows:</p>

<pre><code class="language-shell">$ apt-get update        # Fetch list of available updates
$ apt-get upgrade       # Upgrades current packages
$ apt-get dist-upgrade  # Upgrades, adding or removing packages as dependencies change
</code></pre>

<p><em><em>Example</em>: updating aptitude binaries</em></p>

<p>At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is no longer available in the registry. Installing software can be done by installing binaries, or by using the Ubuntu package manager.</p>

<h3 id="installing-a-mongodb-on-linux-using-apt">Installing a <code>mongodb</code> on Linux using <code>apt</code></h3>

<p>A first-time fresh install, or an update/upgrade, of <code>mongodb</code> can follow the next scripts.</p>

<blockquote><p><strong><em><code>sudo</code></em></strong> may be skipped if the current user has permission to write and execute programs</p></blockquote>

<pre><code class="language-shell"># Add public key used by the aptitude for further updates
# gnupg should be available in the system
$ apt-get install gnupg 
$ wget -qO - https://www.mongodb.org/static/pgp/server-3.6.asc | sudo apt-key add - 

# create and add list for mongodb (version 3.6, but variations can differ from version to version, the same applies to architecture)
$ echo &#34;deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.6 multiverse&#34; | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list

# Update the package lists and make the actual install 
$ sudo apt-get update
$ sudo apt-get install -y mongodb-org

# To install a specific version (3.6.17 in our example) of mongodb, the following command helps
$ sudo apt-get install -y mongodb-org=3.6.17 mongodb-org-server=3.6.17 mongodb-org-shell=3.6.17 mongodb-org-mongos=3.6.17 mongodb-org-tools=3.6.17
</code></pre>

<p><em><em>Example</em>: adding <code>mongodb</code> PPA binaries and installing a particular binary version</em></p>
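<p>When a specific version is pinned as above, it can also help to hold the packages so that a routine <code>apt-get upgrade</code> does not replace them behind our back. This is a sketch; <code>apt-mark</code> ships with <code>apt</code> on Ubuntu.</p>

<pre><code class="language-shell"># Prevent apt-get upgrade from replacing the pinned 3.6.17 packages
$ sudo apt-mark hold mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools

# Release the hold when ready to upgrade again
$ sudo apt-mark unhold mongodb-org mongodb-org-server mongodb-org-shell mongodb-org-mongos mongodb-org-tools
</code></pre>

<p><em><em>Example</em>: holding pinned <code>mongodb</code> packages against accidental upgrades</em></p>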

<p>It is always a good idea to upgrade often. Breaking changes happen on major/minor binary updates, but are less likely on patch upgrades. Versions go in even-numbered pairs: 3.2, 3.4, 3.6 and so on. A transition that skips a version may be catastrophic. For example, to upgrade from an older 3.x to 3.6, we first upgrade to the intermediate 3.4, after which the upgrade from 3.4 to 3.6 becomes possible.</p>

<pre><code class="language-shell"># Part 1
$ apt-cache policy mongodb-org          # Checking installed MongoDB version 
$ apt-get install -y mongodb-org=3.4    # Installing 3.4 MongoDB version 

# Part 2   
# Running mongodb
$ sudo killall mongod &amp;&amp; sleep 3 &amp;&amp; sudo service mongod start
$ sudo service mongodb start            # on older installs the service is named mongodb

# Part 3 
$ mongo                                 # Accessing the mongo CLI

# Compatible Mode
&gt; db.adminCommand( { setFeatureCompatibilityVersion: &#34;3.4&#34; } )  
&gt; exit

# Part 4 
$ sudo apt-get install -y mongodb-org=3.6   # Upgrading to latest 3.6 version 
# Restart Server + As in Part 2.
</code></pre>

<p><em><em>Example</em>: updating <code>mongodb</code> binaries and upgrading to a version</em></p>
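<p>After an upgrade, it is worth double-checking which feature compatibility version the server actually runs with, and which binary is installed. A quick sketch using the stock <code>mongo</code> shell:</p>

<pre><code class="language-shell"># Read back the feature compatibility version set during the upgrade
$ mongo --eval &#39;db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )&#39;

# Confirm the installed binary version
$ mongod --version
</code></pre>

<p><em><em>Example</em>: verifying the running version after an upgrade</em></p>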

<h2 id="installing-mongodb-on-macos">Installing <code>mongodb</code> on macOS</h2>

<p>In case <code>homebrew</code> is not already available on your Mac, this is how to get it up and running. On its own, <code>homebrew</code> depends on the Ruby runtime being available.</p>

<blockquote><p><code>homebrew</code> is a package manager and software installation tool that makes most developer tools installation a breeze. We should also highlight that <code>homebrew</code> requires <code>xcode</code> to be available on the system.</p></blockquote>

<pre><code class="language-shell">$ /usr/bin/ruby -e \
    &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;
</code></pre>

<p><em><em>Example:</em> installation instruction as provided by <a href="https://brew.sh/">brew.sh</a></em></p>

<p>Generally speaking, this is how to <strong><em>install</em></strong> and <strong><em>uninstall</em></strong> things with <code>brew</code></p>

<pre><code class="language-shell">$ brew install wget 
$ brew uninstall wget 
</code></pre>

<p><em><em>Example</em>: installing/uninstalling <code>wget</code> binaries using homebrew</em></p>

<blockquote><p>We should stress that <a href="https://brew.sh/">Homebrew</a> installs packages to their own directory and then symlinks their files into <code>/usr/local</code>.</p></blockquote>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates the system for us. macOS can use the <code>homebrew</code> package manager for maintenance matters. To update/upgrade or check outdated packages, the following commands help.</p>

<pre><code class="language-shell">$ brew outdated                   # lists all outdated packages
$ brew cleanup -n                 # visualize the list of things that are going to be cleaned up

$ brew upgrade                    # Upgrades all outdated packages on the system
$ brew update                     # Updates brew itself and the list of formulae
$ brew upgrade &lt;formula&gt;          # Upgrades one formula

$ brew install &lt;formula&gt;@&lt;version&gt;  # Installs &lt;formula&gt; at a particular version
$ brew tap &lt;user/repo&gt;              # Taps a third-party repository of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap &lt;user/repo&gt; &amp;&amp; brew tap &lt;user/repo&gt;   
$ brew services start &lt;formula&gt;@&lt;version&gt;
</code></pre>

<p><em><em>Example</em>: key commands to work with <code>homebrew</code> cli</em></p>

<blockquote><p>For more information, visit: <a href="https://docs.brew.sh/FAQ">Homebrew ~ FAQ</a>.</p></blockquote>

<h3 id="installing-a-mongodb-on-a-mac-using-homebrew">Installing a <code>mongodb</code> on a Mac using <code>homebrew</code></h3>

<pre><code class="language-shell">$ brew tap mongodb/brew 
$ brew install mongodb-community@3.6
$ brew services start mongodb-community@3.6 # start mongodb as a mac service 
</code></pre>

<p><em><em>Example</em>: Install and running <code>mongodb</code> as a macOS service</em></p>

<blockquote><p><strong><em>Caveats</em></strong> ~ Extra steps are needed to start/stop <code>mongodb</code> automatically when the system goes up/down. This step is vital when doing development on macOS, which does not necessarily need Linux production-bound task runners.</p></blockquote>

<pre><code class="language-shell"># To have launchd start mongodb at login:
$ ln -s /usr/local/opt/mongodb/*.plist ~/Library/LaunchAgents/

# Then to load mongodb now:
$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist

# To unregister and stop the service, use the following command
$ launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mongodb.plist

# When launchctl is not wanted/needed, running mongod directly works fine
$ mongod 
</code></pre>

<p><em><em>Example</em>: Stop/Start when the system stops/starts</em></p>
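<p>Whichever way <code>mongod</code> was started, a quick sanity check confirms the service is up and accepting connections. A sketch, assuming the stock <code>mongo</code> shell is on the <code>PATH</code>:</p>

<pre><code class="language-shell"># Is the mongod process running?
$ pgrep -lx mongod

# List brew-managed services and their status
$ brew services list

# Round-trip to the server: a ping should report ok
$ mongo --eval &#39;db.runCommand( { ping: 1 } )&#39;
</code></pre>

<p><em><em>Example</em>: sanity-checking a running <code>mongod</code> service</em></p>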

<h2 id="installing-mongodb-on-a-windows-machine">Installing <code>mongodb</code> on a Windows machine</h2>

<p>Whereas macOS and most Linux distributions come with Python and Ruby already available, it takes an extra mile for Windows to make those two languages available. We should stress that those two languages are somewhat required to deploy a <code>mongodb</code> environment on most platforms, especially when working with <code>nodejs</code>.</p>

<p>The bright side of this story is that <a href="https://www.mongodb.com/download-center/community"><code>mongodb</code></a> provides Windows binaries that can be downloaded and installed in a couple of clicks.</p>
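<p>For scripted setups, the downloaded MSI can also be installed silently from an elevated command prompt. The installer filename below is hypothetical; substitute the file actually fetched from the download center:</p>

<pre><code class="language-shell"># Silent install of a downloaded MSI, run from an elevated prompt
# the .msi filename is hypothetical: use the file fetched from the download center
&gt; msiexec /i mongodb-win32-x86_64-3.6.17-signed.msi /qn /norestart
</code></pre>

<p><em><em>Example</em>: unattended install of the Windows binaries</em></p>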

<h2 id="automated-upgrades">Automated upgrades</h2>

<p>Before we dive into automatic upgrades, we should consider the nuances associated with managing a <code>mongodb</code> instance. The updates fall into two major, quite interesting, categories: <strong><em>patch</em></strong> updates and <strong><em>version upgrades</em></strong>.</p>

<p>Following the <a href="https://semver.org/">SemVer ~ <em>aka Semantic Versioning</em></a> standard, it is not recommended to consider <strong><em>minor</em></strong>/<strong><em>major</em></strong> versions for automated upgrades. One reason is that these versions are subject to introducing breaking changes or incompatibilities between two versions. On the other hand, patches are less susceptible to introducing breaking changes, hence ideal candidates for automated upgrades. Another reason, among others, is that security fixes are released as patches to a minor version.</p>

<blockquote><p>We should highlight that it is always better to upgrade at deployment time. The process is even easier in a containerized context. We should also automate only patches, so as not to miss security fixes.</p></blockquote>

<p>In the context of Linux, we will use the <strong><em>unattended-upgrades</em></strong> package to do the work.</p>

<pre><code class="language-shell">$ apt-get install unattended-upgrades apticron
</code></pre>

<p><em><em>Example</em>: install unattended-upgrades</em></p>

<p>Two things to fine-tune to make this solution work: first, enable a blacklist of packages we do not want to update automatically; second, enable the origins whose packages we do want updated on a periodic basis. Both are captured in the following configuration.</p>

<pre><code class="language-shell">Unattended-Upgrade::Allowed-Origins {
//  &#34;${distro_id}:${distro_codename}&#34;;
    &#34;${distro_id}:${distro_codename}-security&#34;; # upgrading security patches only 
//   &#34;${distro_id}:${distro_codename}-updates&#34;;  
//  &#34;${distro_id}:${distro_codename}-proposed&#34;;
//  &#34;${distro_id}:${distro_codename}-backports&#34;;
};

Unattended-Upgrade::Package-Blacklist {
    &#34;vim&#34;;
};
</code></pre>

<p><em><em>Example</em>: fine-tune the blacklist and whitelist in <code>/etc/apt/apt.conf.d/50unattended-upgrades</code></em></p>

<p>The next step makes sure the <strong><em>unattended-upgrades</em></strong> download, install and cleanup tasks run on a default schedule: once or twice a day, or once a week.</p>

<pre><code class="language-shell">APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week
APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day
</code></pre>

<p><em><em>Example</em>: tuning the tasks parameter <code>/etc/apt/apt.conf.d/20auto-upgrades</code></em></p>

<p>This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows nor macOS. The last issue is reporting problems when an update fails, so that a human can intervene whenever possible. That is where the second tool, <code>apticron</code>, mentioned in the first paragraph comes in. To make it work, we specify which email address to send messages to, and that will be all.</p>

<pre><code class="language-shell">EMAIL=&#34;&lt;email&gt;@&lt;host.tld&gt;&#34;
</code></pre>

<p><em><em>Example</em>: tuning reporting tasks email parameter <code>/etc/apticron/apticron.conf</code></em></p>
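<p>Before trusting the setup, a dry run shows what <strong><em>unattended-upgrades</em></strong> would do without touching the system, and the log confirms what real runs actually did:</p>

<pre><code class="language-shell"># Simulate an unattended upgrade run, with verbose output
$ sudo unattended-upgrade --dry-run --debug

# The results of real runs land in the logs
$ cat /var/log/unattended-upgrades/unattended-upgrades.log
</code></pre>

<p><em><em>Example</em>: dry-running unattended-upgrades before relying on it</em></p>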

<h2 id="conclusion">Conclusion</h2>

<p>In this article we revisited ways to install <code>mongodb</code> on various platforms. Even though <em><a href="https://getsimple.works/how-to-configure-nodejs-applications#configure-mongodb-as-a-database-server-for-nodejs-project">configuration was beyond the scope of this article</a></em>, we managed to get some everyday quick refreshers out of it. There are areas where we wish we had added more coverage, probably not in this first version of the article.</p>

<blockquote><p>Some of the other places where people wonder how to install <code>mongodb</code> on various platforms include, but are not limited to, <a href="https://www.quora.com/search?q=install+mongodb">Quora</a>, <a href="https://stackoverflow.com/search?q=install+mongodb">StackOverflow</a> and <a href="https://www.reddit.com/search?q=install%20mongodb&amp;restrict_sr=">Reddit</a>.</p></blockquote>

<h2 id="reading-list">Reading list</h2>
<ul><li>Documentation on how to <a href="https://docs.mongodb.com/v3.6/tutorial/install-mongodb-on-ubuntu/">Install <code>mongodb</code> on Ubuntu</a></li>
<li><em><a href="https://ss64.com/osx/">An A-Z Index of the Apple macOS command line (macOS bash)</a></em> and the <em><a href="https://ss64.com/osx/syntax.html">Apple macOS How-to guides and examples</a></em>, <a href="https://www.launchd.info">launchd</a></li>
<li><a href="https://askubuntu.com/a/428778">The answer to two questions</a>: 1) <strong><em>How to know package version</em></strong> and 2) <strong><em>How to install a specific Package with aptitude</em></strong>.</li>
<li><a href="https://docs.openshift.com/container-platform/3.6/install_config/upgrading/automated_upgrades.html">Performing Automated In-place Cluster Updates</a>, on Google Cloud Platform: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades">Auto-upgrading nodes</a></li>
<li>More on configuring unattended-upgrades is in these articles: <a href="https://help.ubuntu.com/lts/serverguide/automatic-updates.html">Automatic Updates</a>,  <a href="https://haydenjames.io/how-to-enable-unattended-upgrades-on-ubuntu-debian/">How to Enable Unattended Upgrades on Ubuntu/Debian</a> and <a href="https://www.howtoforge.com/tutorial/how-to-setup-automatic-security-updates-on-ubuntu-1604/">How to Setup Automatic Security Updates on Ubuntu 16.04</a></li></ul>

<p><a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://getsimple.works/tag:UnattendedUpgrades" class="hashtag"><span>#</span><span class="p-category">UnattendedUpgrades</span></a> <a href="https://getsimple.works/tag:mongodb" class="hashtag"><span>#</span><span class="p-category">mongodb</span></a> <a href="https://getsimple.works/tag:y2020" class="hashtag"><span>#</span><span class="p-category">y2020</span></a> <a href="https://getsimple.works/tag:Jan2020" class="hashtag"><span>#</span><span class="p-category">Jan2020</span></a> <a href="https://getsimple.works/tag:HowTo" class="hashtag"><span>#</span><span class="p-category">HowTo</span></a> <a href="https://getsimple.works/tag:ConfiguringNodejsApplications" class="hashtag"><span>#</span><span class="p-category">ConfiguringNodejsApplications</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:TestingNodejsApplications" class="hashtag"><span>#</span><span class="p-category">TestingNodejsApplications</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-install-mongodb</guid>
      <pubDate>Fri, 31 Jan 2020 22:47:36 +0000</pubDate>
    </item>
    <item>
      <title>How to install redis</title>
      <link>https://getsimple.works/how-to-install-redis?pk_campaign=rss-feed</link>
      <description>&lt;![CDATA[This article revisits essentials on how to install redis key/value data store on development and production servers.&#xA;&#xA;  This article has complementary materials to the Testing nodejs Applications book. However, the article is designed to help both those who already bought the book, as well as the wide audience of software developers  to setup working environment.  Testing Nodejs Applications Book Cover&#xA;You can grab a copy of this book on this link  &#xA;&#xA;Installing redis on Linux &#xA;&#xA;It is always a good idea to update the system before start working. There is no exception, even when a daily task updates automatically binaries. That can be achieved on Ubuntu and Aptitude enabled systems as following:&#xA;&#xA;$ apt-get update # Fetch list of available updates&#xA;$ apt-get upgrade # Upgrades current packages&#xA;$ apt-get dist-upgrade # Installs only new updates&#xA;Example: updating aptitude binaries&#xA;&#xA;At this point most of packages should be installed or upgraded. Except Packages whose PPA have been removed or not available in the registry. 
Installing software can be done by installing binaries, or using Ubuntu package manager.&#xA;&#xA;Installing a redis on Linux using apt&#xA;&#xA;Updating   The &#xA;Installing binaries &#xA;$ curl -O http://download.redis.io/redis-stable.tar.gz&#xA;$ tar xzvf redis-stable.tar.gz &amp;&amp; cd redis-stable &#xA;$ make &amp;&amp; make install&#xA;Configure redis - for first time installs &#xA;&#xA;Install via PPA &#xA;$ apt-get install -y python-software-properties #provides access to add-apt-repository &#xA;$ apt-get install -y software-properties-common python-software-properties&#xA;&#xA;@link https://packages.ubuntu.com/bionic/redis &#xA;$ add-apt-repository -y ppa:bionic/redis # Alternatively rwky/redis&#xA;$ apt-get update&#xA;$ apt-get install -y redis-server&#xA;&#xA;Starting Redis for development on Mac&#xA;$ redis-server /usr/local/etc/redis.conf &#xA;Example: installing redis binaries with aptitude&#xA;&#xA;Installing redis on a Mac system &#xA; &#xA;In case homebrew is not already available on your mac, this is how to get one up and running. On its own, homebrew depends on ruby runtime to be available. &#xA;&#xA;  homebrew is a package manager and software installation tool that makes most developer tools installation a breeze. &#xA;&#xA;$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;&#xA;Example: installation instruction as provided by brew.sh&#xA;&#xA;Generally speaking, this is how to install/uninstall things with brew &#xA;&#xA;$ brew install wget &#xA;$ brew uninstall wget &#xA;Example: installing/uninstalling wget binaries using homebrew&#xA;&#xA;  We have to to stress on the fact that Homebrew installs packages to their own directory and then symlinks their files into /usr/local.&#xA;&#xA;It is always a good idea to update the system before start working. And that, even when we have a daily task that automatically updates the system for us. 
macOS  can use homebrew package manager on maintenance matters. To update/upgrade or check outdated packages, following commands would help. &#xA;&#xA;$ brew outdated                   # lists all outdated packages&#xA;$ brew cleanup -n                 # visualize the list of things are going to be cleaned up.&#xA;&#xA;$ brew upgrade                    # Upgrades all things on the system&#xA;$ brew update                     # Updates all outdated + brew itself&#xA;$ brew update formula           # Updates one formula&#xA;&#xA;$ brew install formula@version    # Installs formula at a particular version.&#xA;$ brew tap formular@version/brew  # Installs formular from third party repository&#xA;&#xA;untap/re-tap a repo when previous installation failed&#xA;$ brew untap formular &amp;&amp; brew tap formula   &#xA;$ brew services start formular@version&#xA;Example: key commands to work with homebrew cli&#xA;&#xA;  For more informations, visit: Homebrew ~ FAQ.&#xA;&#xA;Installing a redis on a macOS  using curl&#xA;&#xA;Installing redis via a curl command is not that different as on Linux system. The following instructions can accomplish the installation. 
&#xA;&#xA;Installing binaries &#xA;$ curl -O http://download.redis.io/redis-stable.tar.gz&#xA;$ tar xzvf redis-stable.tar.gz &amp;&amp; cd redis-stable &#xA;$ make &amp;&amp; make install&#xA;Configure redis - for first time installs &#xA;Example: install redis binaries using curl and make&#xA;&#xA;Installing a redis on a Mac using homebrew&#xA;&#xA;$ brew install redis        # Installation following formula@version template &#xA;$ brew services start redis # Starting redis as a service  &#xA;&#xA;Alternatively start as usual &#xA;$ redis-server /usr/local/etc/redis.conf&#xA;Running on port: 6379&#xA;Example: install redis binaries using homebrew&#xA;&#xA;Installing redis on a Windows machine&#xA;&#xA;MacOs comes with Python and Ruby already enabled, these two languages are somehow required to run successfully a nodejs environment. This is an easy target as redis gives windows binaries that we can download and install on a couple of clicks.&#xA;&#xA;Automated upgrades &#xA;&#xA;Before we dive into automatic upgrades, we should consider nuances associated to managing a mongodb instance. The updates fall into two major, quite interesting, categories: patch updates and version upgrades. &#xA;&#xA;Following the SemVer ~ aka Semantic Versioning standard, it is recommended that the only pair minor versions be considered for version upgrades. This is because minor versions, as well as major versions, are subject to introducing breaking changes or incompatibility between two versions.  On the other hand, patches do not introduce breaking changes. Those can therefore be automated. &#xA;&#xA;  We should highlight that it is always better to upgrade at deployment time. The process is even easier in containerized context. We should also automate only patches, to avoid to miss security patches. &#xA;&#xA;In the context of Linux, we will use the unattended-upgrades package to do the work. 
&#xA;&#xA;$ apt-get install unattended-upgrades apticron&#xA;Example: install unattended-upgrades&#xA;&#xA;Two things to fine-tune to make this solution work are: to enable a blacklist of packages we do not to automatically update, and two, to enable particular packages we would love to update on a periodical basis. That is compiled in the following shell scripts.&#xA;&#xA;Unattended-Upgrade::Allowed-Origins {&#xA;//  &#34;${distroid}:${distrocodename}&#34;;&#xA;    &#34;${distroid}:${distrocodename}-security&#34;; # upgrading security patches only &#xA;//   &#34;${distroid}:${distrocodename}-updates&#34;;  &#xA;//  &#34;${distroid}:${distrocodename}-proposed&#34;;&#xA;//  &#34;${distroid}:${distrocodename}-backports&#34;;&#xA;};&#xA;&#xA;Unattended-Upgrade::Package-Blacklist {&#xA;    &#34;vim&#34;;&#xA;};&#xA;Example: fine-tune the blacklist and whitelist in /etc/apt/apt.conf.d/50unattended-upgrades&#xA;&#xA;The next step is necessary to make sure  unattended-upgrades download, install and cleanups tasks have a default period: once, twice a day or a week. &#xA;&#xA;APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day&#xA;APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day&#xA;APT::Periodic::AutocleanInterval &#34;7&#34;;               # clean week worth of unused packages once a week&#xA;APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day&#xA;Example: tuning the tasks parameter /etc/apt/apt.conf.d/20auto-upgrades&#xA;&#xA;This approach works on Linux(Ubuntu), especially deployed in production, but not Windows nor macOS. The last issue, is to be able to report problems when an update fails, so that a human can intervene whenever possible. That is where the second tool apticron in first paragraph intervenes. To make it work, we will specify which email to send messages to, and that will be all. 
&#xA;&#xA;EMAIL=&#34;email@host.tld&#34;&#xA;Example: tuning reporting tasks email parameter /etc/apticron/apticron.conf&#xA;&#xA;Conclusion&#xA;&#xA;In this article we revisited ways to install redis on various platforms. Even though configuration was beyond the scope of this article, we managed to get everyday quick refreshers out. &#xA;&#xA;Reading list &#xA;&#xA;Install and config Redis on Mac OS X via Homebrew&#xA;Configuring nodejs applications&#xA;&#xA;#nodejs #homebrew #UnattendedUpgrades #nvm #n #y2020 #Jan2020 #HowTo #ConfiguringNodejsApplications #tdd #TestingNodejsApplications]]&gt;</description>
      <content:encoded><![CDATA[<p>This article revisits essentials on how to install <code>redis</code> key/value data store on development and production servers.</p>

<blockquote><p>This article has complementary materials to the <strong><em><a href="http://bit.ly/2ZFJytb">Testing <code>nodejs</code> Applications book</a></em></strong>. The article is designed to help both readers who already bought the book and the wider audience of software developers to set up a working environment.  <a href="http://bit.ly/2ZFJytb"><img src="https://snap.as/a/42OS2vs.png" alt="Testing Nodejs Applications Book Cover"/></a>
<strong><em><a href="http://bit.ly/2ZFJytb">You can grab a copy of this book on this link</a></em></strong></p></blockquote>

<h2 id="installing-redis-on-linux">Installing <code>redis</code> on Linux</h2>

<p>It is always a good idea to update the system before starting work, even when a daily task updates binaries automatically. On Ubuntu and other Aptitude-enabled systems, that can be achieved as follows:</p>

<pre><code class="language-shell">$ apt-get update # Fetch list of available updates
$ apt-get upgrade # Upgrades current packages
 
$ apt-get dist-upgrade # Upgrades, adding or removing dependencies as needed
</code></pre>

<p><em><em>Example</em>: updating aptitude binaries</em></p>

<p>At this point most packages should be installed or upgraded, except packages whose PPA has been removed or is no longer available in the registry. Software can be installed either from binaries or via the Ubuntu package manager.</p>

<h3 id="installing-a-redis-on-linux-using-apt">Installing <code>redis</code> on Linux using <code>apt</code></h3>

<p>Updating <code>redis</code> – in case the global update above didn&#39;t take effect.
  The <code>-y</code> option answers YES ahead of time, so there is no <code>Y/N?</code> prompt while installing libraries.</p>

<pre><code class="language-shell"># Installing binaries 
$ curl -O http://download.redis.io/redis-stable.tar.gz
$ tar xzvf redis-stable.tar.gz &amp;&amp; cd redis-stable 
$ make &amp;&amp; make install
# Configure redis - for first time installs 

# Install via PPA 
$ apt-get install -y python-software-properties #provides access to add-apt-repository 
$ apt-get install -y software-properties-common python-software-properties

# @link https://packages.ubuntu.com/bionic/redis 
$ add-apt-repository -y ppa:rwky/redis
$ apt-get update
$ apt-get install -y redis-server

# Starting Redis manually, for development
$ redis-server /etc/redis/redis.conf 
</code></pre>

<p><em><em>Example</em>: installing <code>redis</code> binaries with aptitude</em></p>

<h2 id="installing-redis-on-a-mac-system">Installing <code>redis</code> on a Mac system</h2>

<p>In case <code>homebrew</code> is not already available on your mac, this is how to get it up and running. On its own, <code>homebrew</code> depends on the Ruby runtime being available.</p>

<blockquote><p><code>homebrew</code> is a package manager and software installation tool that makes most developer tools installation a breeze.</p></blockquote>

<pre><code class="language-shell">$ /usr/bin/ruby -e &#34;$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)&#34;
</code></pre>

<p><em><em>Example:</em> installation instruction as provided by <a href="https://brew.sh/">brew.sh</a></em></p>

<p>Generally speaking, this is how to install/uninstall things with <code>brew</code></p>

<pre><code class="language-shell">$ brew install wget 
$ brew uninstall wget 
</code></pre>

<p><em><em>Example</em>: installing/uninstalling <code>wget</code> binaries using homebrew</em></p>

<blockquote><p>We should stress that <a href="https://brew.sh/">Homebrew</a> installs packages to their own directory and then symlinks their files into <code>/usr/local</code>.</p></blockquote>

<p>It is always a good idea to update the system before starting work, even when a daily task automatically updates it for us. On macOS, the <code>homebrew</code> package manager can handle these maintenance matters. The following commands help update, upgrade, or check outdated packages.</p>

<pre><code class="language-shell">$ brew outdated                   # lists all outdated packages
$ brew cleanup -n                 # visualize the list of things that are going to be cleaned up.

$ brew upgrade                    # Upgrades all outdated packages
$ brew update                     # Updates brew itself and the list of formulae
$ brew upgrade &lt;formula&gt;          # Upgrades one formula

$ brew install &lt;formula&gt;@&lt;version&gt;  # Installs &lt;formula&gt; at a particular version.
$ brew tap &lt;user&gt;/&lt;repo&gt;            # Adds a third-party repository of formulae

# untap/re-tap a repo when a previous installation failed
$ brew untap &lt;user&gt;/&lt;repo&gt; &amp;&amp; brew tap &lt;user&gt;/&lt;repo&gt;   
$ brew services start &lt;formula&gt;@&lt;version&gt;
</code></pre>

<p><em><em>Example</em>: key commands to work with <code>homebrew</code> cli</em></p>

<blockquote><p>For more information, visit: <a href="https://docs.brew.sh/FAQ">Homebrew ~ FAQ</a>.</p></blockquote>

<h3 id="installing-a-redis-on-a-macos-using-curl">Installing <code>redis</code> on macOS using <code>curl</code></h3>

<p>Installing <code>redis</code> via a <code>curl</code> command is not that different from the Linux procedure. The following instructions accomplish the installation.</p>

<pre><code class="language-shell"># Installing binaries 
$ curl -O http://download.redis.io/redis-stable.tar.gz
$ tar xzvf redis-stable.tar.gz &amp;&amp; cd redis-stable 
$ make &amp;&amp; make install
# Configure redis - for first time installs 
</code></pre>

<p><em><em>Example</em>: install <code>redis</code> binaries using <code>curl</code> and <code>make</code></em></p>

<h3 id="installing-a-redis-on-a-mac-using-homebrew">Installing <code>redis</code> on a Mac using <code>homebrew</code></h3>

<pre><code class="language-shell">$ brew install redis        # Installation following &lt;formula&gt;@&lt;version&gt; template 
$ brew services start redis # Starting redis as a service  

# Alternatively start as usual 
$ redis-server /usr/local/etc/redis.conf
# Running on port: 6379
</code></pre>

<p><em><em>Example</em>: install <code>redis</code> binaries using <code>homebrew</code></em></p>

<h2 id="installing-redis-on-a-windows-machine">Installing <code>redis</code> on a Windows machine</h2>

<p>This is an easy target, as <a href="https://redis.io/download"><code>redis</code></a> provides Windows binaries that we can download and install in a couple of clicks.</p>

<h2 id="automated-upgrades">Automated upgrades</h2>

<p>Before we dive into automatic upgrades, we should consider the nuances associated with managing a <code>redis</code> instance. The updates fall into two major, quite interesting, categories: <strong><em>patch</em></strong> updates and <strong><em>version upgrades</em></strong>.</p>

<p>Following the <a href="https://semver.org/">SemVer ~ <em>aka Semantic Versioning</em></a> standard, it is recommended that only even-numbered (stable) <strong><em>minor</em></strong> versions be considered for version upgrades. This is because minor versions, like major versions, can introduce breaking changes or incompatibilities between two versions. Patches, on the other hand, do not introduce breaking changes; those can therefore be automated.</p>
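<p>The convention of upgrading only even (stable-line) minor versions can be checked mechanically before automating an upgrade. A minimal sketch, assuming the usual <code>major.minor.patch</code> version string (the helper name <code>is_even_minor</code> is ours, not a redis tool):</p>

<pre><code class="language-shell"># does a version string use an even (stable-line) minor number?
is_even_minor() {
  minor=$(echo "$1" | cut -d. -f2)
  test $((minor % 2)) -eq 0
}

if is_even_minor '5.0.7'; then echo '5.0.7: stable line'; fi
if is_even_minor '4.9.0'; then :; else echo '4.9.0: development line'; fi
</code></pre>

<p><em><em>Example</em>: checking minor-version parity before considering an upgrade</em></p>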

<blockquote><p>We should highlight that it is always better to upgrade at deployment time; the process is even easier in a containerized context. We should also automate only patches, so as not to miss security patches.</p></blockquote>

<p>In the context of Linux, we will use the <strong><em>unattended-upgrades</em></strong> package to do the work.</p>

<pre><code class="language-shell">$ apt-get install unattended-upgrades apticron
</code></pre>

<p><em><em>Example</em>: install unattended-upgrades</em></p>

<p>Two things to fine-tune to make this solution work: first, a blacklist of packages we do not want to update automatically; second, the particular package origins we do want updated on a periodic basis. Both are captured in the following configuration.</p>

<pre><code class="language-shell">Unattended-Upgrade::Allowed-Origins {
//  &#34;${distro_id}:${distro_codename}&#34;;
    &#34;${distro_id}:${distro_codename}-security&#34;; # upgrading security patches only 
//   &#34;${distro_id}:${distro_codename}-updates&#34;;  
//  &#34;${distro_id}:${distro_codename}-proposed&#34;;
//  &#34;${distro_id}:${distro_codename}-backports&#34;;
};

Unattended-Upgrade::Package-Blacklist {
    &#34;vim&#34;;
};
</code></pre>

<p><em><em>Example</em>: fine-tune the blacklist and whitelist in <code>/etc/apt/apt.conf.d/50unattended-upgrades</code></em></p>

<p>The next step makes sure the <strong><em>unattended-upgrades</em></strong> download, install, and cleanup tasks run on a default period: once or twice a day, or once a week.</p>

<pre><code class="language-shell">APT::Periodic::Update-Package-Lists &#34;1&#34;;            # Updates package list once a day
APT::Periodic::Download-Upgradeable-Packages &#34;1&#34;;   # download upgrade candidates once a day
APT::Periodic::AutocleanInterval &#34;7&#34;;               # clears obsolete packages from the cache once a week
APT::Periodic::Unattended-Upgrade &#34;1&#34;;              # install downloaded packages once a day
</code></pre>

<p><em><em>Example</em>: tuning the tasks parameter <code>/etc/apt/apt.conf.d/20auto-upgrades</code></em></p>
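<p>Once the file is in place, a quick sanity check can confirm each directive made it in. The sketch below writes the directives to a scratch file and greps for them; in real use, point <code>grep</code> at <code>/etc/apt/apt.conf.d/20auto-upgrades</code> instead:</p>

<pre><code class="language-shell"># write the directives to a scratch file, then confirm each one is present
conf=$(mktemp)
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "1";' \
  'APT::Periodic::Unattended-Upgrade "1";' | tee "$conf"

for key in Update-Package-Lists Unattended-Upgrade; do
  if grep -q "APT::Periodic::$key" "$conf"; then echo "$key: present"; fi
done
</code></pre>

<p><em><em>Example</em>: verifying that the periodic directives are present</em></p>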

<p>This approach works on Linux (Ubuntu), especially when deployed in production, but not on Windows or macOS. The last issue is reporting problems when an update fails, so that a human can intervene whenever possible. That is where the second tool, <code>apticron</code>, mentioned in the first paragraph, comes in. To make it work, we specify which email to send messages to, and that is all.</p>

<pre><code class="language-shell">EMAIL=&#34;&lt;email&gt;@&lt;host.tld&gt;&#34;
</code></pre>

<p><em><em>Example</em>: tuning reporting tasks email parameter <code>/etc/apticron/apticron.conf</code></em></p>

<h2 id="conclusion">Conclusion</h2>

<p>In this article we revisited ways to install <code>redis</code> on various platforms. Even though <strong><em><a href="https://getsimple.works/how-to-configure-nodejs-applications#configure-redis-to-run-with-a-nodejs-server">configuration was beyond the scope of this article</a></em></strong>, we managed to get some everyday quick refreshers out.</p>

<h2 id="reading-list">Reading list</h2>
<ul><li><a href="https://medium.com/@petehouston/install-and-config-redis-on-mac-os-x-via-homebrew-eb8df9a4f298">Install and config Redis on Mac OS X via Homebrew</a></li>
<li><a href="https://getsimple.works/how-to-configure-nodejs-applications">Configuring <code>nodejs</code> applications</a></li></ul>

<p><a href="https://getsimple.works/tag:nodejs" class="hashtag"><span>#</span><span class="p-category">nodejs</span></a> <a href="https://getsimple.works/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://getsimple.works/tag:UnattendedUpgrades" class="hashtag"><span>#</span><span class="p-category">UnattendedUpgrades</span></a> <a href="https://getsimple.works/tag:nvm" class="hashtag"><span>#</span><span class="p-category">nvm</span></a> <a href="https://getsimple.works/tag:n" class="hashtag"><span>#</span><span class="p-category">n</span></a> <a href="https://getsimple.works/tag:y2020" class="hashtag"><span>#</span><span class="p-category">y2020</span></a> <a href="https://getsimple.works/tag:Jan2020" class="hashtag"><span>#</span><span class="p-category">Jan2020</span></a> <a href="https://getsimple.works/tag:HowTo" class="hashtag"><span>#</span><span class="p-category">HowTo</span></a> <a href="https://getsimple.works/tag:ConfiguringNodejsApplications" class="hashtag"><span>#</span><span class="p-category">ConfiguringNodejsApplications</span></a> <a href="https://getsimple.works/tag:tdd" class="hashtag"><span>#</span><span class="p-category">tdd</span></a> <a href="https://getsimple.works/tag:TestingNodejsApplications" class="hashtag"><span>#</span><span class="p-category">TestingNodejsApplications</span></a></p>
]]></content:encoded>
      <guid>https://getsimple.works/how-to-install-redis</guid>
      <pubDate>Fri, 31 Jan 2020 22:45:40 +0000</pubDate>
    </item>
  </channel>
</rss>