How to choose the right tools
There is a never-ending war in every developer community about the best tools to use. Some tools fit well with the project at hand, but when they are sunset for various reasons, remorse starts creeping in.
In this blog, we present a simple framework to apply when choosing any software development tool, and show how to manage the risk that comes with the tools we adopt.
In this article we will talk about:
- Choosing a (testing/software development) framework
- Big-bang adoption is fatal: using the adoption curve to manage adoption risk
This article provides complementary material to the Testing nodejs Applications book. It is designed both to help those who bought the book set up their working environment and to serve the wider audience of software developers who need the same information.
The framework
We tend to use tools we love, and we tend to impose those tools on the teams or customers we work with. This approach works sometimes, but it can also breed tribalism, which most of the time draws push-back from our peers. Adopting a tool for a group of people requires thinking and looking beyond our own preferences. The question is how we get there.
This framework for choosing tools shifts the focus from our sentiments to the problems we are trying to solve. The algorithm is simple: instead of starting from a tool suggestion, start from the problem, check whether an existing solution already covers it, then compare how various tools solve the problem at hand.
Focusing on the problem
It sounds easy to transition from suggesting tools to focusing on the problem. From experience though, deciding which problem to tackle first gets as heated as the tool suggestions themselves.
One can argue that the most important issue becomes evident when a problem is of utter urgency, such as a database being down, or downtime due to running out of memory or CPU. When everything is operational, however, subtler signals have to come into play.
The first tool applies to systems that already have error-data collection in place: the 80/20 rule. There is a well-known observation in economics called [the Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle). In the computing world, it shows up as "roughly 80% of the effects come from 20% of the causes". From Microsoft's early days: "by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated".
The quotes in the previous paragraph come from the Wikipedia [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle) page.
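To make the idea concrete, here is a minimal TypeScript sketch of such a Pareto pass, assuming error reports have already been aggregated into a count per bug. The `reportCounts` data and its shape are made up for the example, not a real API:

```ts
// Minimal Pareto sketch: find the smallest set of bugs that accounts
// for roughly 80% of all crash reports. `reportCounts` maps a bug id
// to how often it was reported (hypothetical data).
const reportCounts: Record<string, number> = {
  "BUG-101": 540, "BUG-102": 220, "BUG-103": 90, "BUG-104": 80,
  "BUG-105": 40, "BUG-106": 20, "BUG-107": 10,
};

function paretoSubset(counts: Record<string, number>, threshold = 0.8): string[] {
  const total = Object.values(counts).reduce((sum, n) => sum + n, 0);
  // Most-reported bugs first.
  const sorted = Object.entries(counts).sort(([, a], [, b]) => b - a);
  const subset: string[] = [];
  let covered = 0;
  for (const [bug, n] of sorted) {
    if (covered / total >= threshold) break;
    subset.push(bug);
    covered += n;
  }
  return subset;
}

// Three of the seven bugs cover 85% of the 1000 reports.
console.log(paretoSubset(reportCounts)); // ["BUG-101", "BUG-102", "BUG-103"]
```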
The result of this first exploratory exercise is a cluster of problems that then needs to be classified, heuristically, in order of importance. Any arbitrary classification can work. That is where yet another tool comes in handy to identify the immediate problems to work on.
The Eisenhower Box is a decision-making tool that helps determine which problem to take on next. It is a 2x2 matrix with two columns, [urgent, not urgent], and two rows, [important, not important]. The cross-section of [urgent x important] has to be done right away. [not urgent x important] can be scheduled for later. [not important x urgent] can be delegated for immediate resolution. Anything that is [not important x not urgent] can be dismissed. But since all the clusters being analyzed are already in the 20% that needs fixing, the [not important x not urgent] items can be re-introduced in the next work iterations.
The new, smaller [urgent x important] cluster can itself be ordered by importance using a weighted matrix; that tool is discussed in more detail below, when evaluating tools. It is especially useful when the [urgent x important] cluster contains on the order of 10 or 100 items and a consensus cannot be reached. A small code sketch of the box follows the table below.
 | urgent | not urgent |
---|---|---|
Important | do | schedule |
Not Important | delegate | dismiss |
Table: The Eisenhower box
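The box translates naturally into code. Below is a minimal sketch in TypeScript; the `Problem` shape and the boolean urgency/importance flags are assumptions made for illustration:

```ts
// A sketch of the Eisenhower box as code.
type Problem = { name: string; urgent: boolean; important: boolean };
type Action = "do" | "schedule" | "delegate" | "dismiss";

function triage({ urgent, important }: Problem): Action {
  if (urgent && important) return "do";        // tackle right away
  if (!urgent && important) return "schedule"; // plan for later
  if (urgent && !important) return "delegate"; // hand off for immediate resolution
  return "dismiss";                            // revisit in a later iteration
}

const backlog: Problem[] = [
  { name: "database down", urgent: true, important: true },
  { name: "flaky test suite", urgent: false, important: true },
];
backlog.forEach((p) => console.log(p.name, "->", triage(p)));
// database down -> do
// flaky test suite -> schedule
```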
Choosing the tool
The variety of choices for any tool (libraries, frameworks, etc.), especially in the JavaScript space, is staggering. That is a good thing in a sense, but it unfortunately comes with a hefty price: choice paralysis.
To build a data-driven consensus, a weighted decision matrix can be used. The tool with the highest score becomes, ipso facto, the tool of the team/project. The problem is how to identify criteria and weigh the influence each criterion exercises on our collective decision making.
Even though there is a wide range of pretty good tools to choose from, there is a set of factors most of us agree on and that you may consider while selecting your tool:
- taste: it is OK to like a tool for the sake of loving it
- trendiness: if the tool is really popular, that influence has to be counted as well
- learning curve: how long it takes to learn/debug versus how soon it starts paying off
- integration (plug & play): how easily it integrates with the existing stack, such as a testing framework already in place
- community/help: how good the documentation is, and whether the product has a dedicated support team
- openness/affordability: whether the technology is closed, open-source, or free
- stability: how active the backing community is
- completeness: how well the tool is maintained (LTS releases, etc.)
Once the adoption criteria are identified, a weight can be chosen for each (heuristically or objectively). In case of dissent, a vote can be called to resolve the issue quickly. If folks have limited knowledge outside their comfort zone, there are avenues where this kind of information can be gathered: technology radars, existing tools in our codebase, marketplaces, GitHub, tech blogs, etc.
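As one possible sketch of deriving weights from votes, assuming each team member rates a criterion's importance from 1 to 5 (the criteria mirror the list above; the vote data is invented):

```ts
// Hypothetical sketch: derive a weight per criterion by averaging
// team votes. Each inner array holds one member's 1-5 rating.
const votes: Record<string, number[]> = {
  taste: [4, 5, 3],
  integration: [3, 3, 3],
  completeness: [2, 2, 2],
};

const weights = Object.fromEntries(
  Object.entries(votes).map(([criterion, scores]) => [
    criterion,
    scores.reduce((sum, s) => sum + s, 0) / scores.length,
  ]),
);
console.log(weights); // { taste: 4, integration: 3, completeness: 2 }
```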
The following example shows how to choose a testing framework, after identifying that the main issue is that tests are not being written due to a poor overall testing experience.
Products/criteria | taste | integration | completeness | Score | Rank |
---|---|---|---|---|---|
Weight → | 4 | 3 | 2 | | |
ava | 1x4=4 | 1x3=3 | 0x2=0 | 7 | 2 |
jest | 1x4=4 | 0x3=0 | 1x2=2 | 6 | 3 |
mocha | 1x4=4 | 1x3=3 | 1x2=2 | 9 | 1 |
assert | 0x4=0 | 1x3=3 | 1x2=2 | 5 | 4 |
jasmine | 1x4=4 | 1x3=3 | 0x2=0 | 7 | 2 |
Table: Weighted Decision Matrix
The score on each criterion can be a simple Yes/No or a vote count; here we used 0 and 1, based on gut feeling (or majority votes). If the results are not conclusive, we can use a different scoring scale, or eliminate the low performers and start over. The scoring itself is easy to automate, as sketched below.
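Here is a minimal TypeScript sketch reproducing the table above. The weights and 0/1 scores come from the table (the same weights derived in the earlier voting sketch); the data shapes are assumptions for illustration:

```ts
// Weighted decision matrix: total = sum of (weight x score) per tool.
const weights: Record<string, number> = { taste: 4, integration: 3, completeness: 2 };

const candidates: Record<string, Record<string, number>> = {
  ava:     { taste: 1, integration: 1, completeness: 0 },
  jest:    { taste: 1, integration: 0, completeness: 1 },
  mocha:   { taste: 1, integration: 1, completeness: 1 },
  assert:  { taste: 0, integration: 1, completeness: 1 },
  jasmine: { taste: 1, integration: 1, completeness: 0 },
};

const ranked = Object.entries(candidates)
  .map(([tool, scores]) => ({
    tool,
    total: Object.entries(weights).reduce(
      (sum, [criterion, weight]) => sum + weight * (scores[criterion] ?? 0),
      0,
    ),
  }))
  .sort((a, b) => b.total - a.total); // highest score first

console.log(ranked); // mocha: 9, ava: 7, jasmine: 7, jest: 6, assert: 5
```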
Managing adoption risks
Buyer's remorse is one of the risks associated with adopting new tools. In addition, a product we just adopted can be abruptly sunset, or can change its policy (a price increase, dropping a category of industry it serves, etc.).
We need a strategy that allows us to adopt a tool while limiting the damage caused by the tool's shortcomings: when the tool proves to be not as advertised, when we are caught in the middle as a product is sunset, or when we pay any other price that comes with adopting an open-source tool (such as unresolved bugs).
Adopting a tool in steps, instead of in a big bang, proves effective when it comes to managing risk. On the adoption graph, time is on the x-axis and adoption on the y-axis. The three phases are the exploratory phase, the expansion phase, and organization-wide adoption. The adoption curve is S-shaped. The opposite of the adoption curve is the sunset curve, where we gradually pull the plug on the existing tool.
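One possible sketch of such a stepwise rollout is a gate that enables the new tool for a growing share of projects per phase. The phase percentages and the hashing scheme below are assumptions for illustration, not a prescription:

```ts
// Hypothetical stepwise rollout gate: each phase of the S-curve
// enables the new tool for a larger share of projects.
import { createHash } from "node:crypto";

const phases = { exploratory: 5, expansion: 40, "org-wide": 100 } as const;

function usesNewTool(projectId: string, phase: keyof typeof phases): boolean {
  // Hash the project id to a stable bucket in [0, 100), so a project
  // keeps its assignment across phases.
  const digest = createHash("sha256").update(projectId).digest();
  const bucket = digest.readUInt32BE(0) % 100;
  return bucket < phases[phase];
}

console.log(usesNewTool("billing-service", "exploratory")); // true for ~5% of projects
console.log(usesNewTool("billing-service", "org-wide"));    // true for everyone
```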
Example: How Big Technical Changes Happen at Slack
Conclusion
Adopting new tools requires looking beyond tribalism; it is hard to imagine any developer giving up on their beloved tool. Adding a new tool to a shared toolkit goes beyond one's individual choice and comes with a shared risk that has to be managed as a collective.
In this blog, we discussed a simple, adaptable framework that can be used when choosing any software development tool.