6 Questions for your next JavaScript project

Published: 2014-11-22 by Lars | codethoughts

Introduction

When starting a new front-end JavaScript project, we have a lot of decisions to make. The JavaScript ecosystem has grown explosively over the past couple of years with exciting new tools, libraries and frameworks appearing on a weekly basis. To illustrate, look at the TodoMVC site that showcases different MVC frameworks: 15 main frameworks and more than 25 experimental frameworks are included. I personally know of projects using at least six of those.

It's not easy to stay up-to-date with all this activity. And we cannot simply present a list of recommendations, because different projects have different requirements. Instead, we should ask ourselves key questions about our project, and then spend some time figuring out which combination of tools, libraries and frameworks will best suit our project and our specific requirements.

Based on my experience starting a number of successful JavaScript projects over the past 5 years, this blog post presents 6 of those key questions. I have included questions that don't have a simple answer, and skipped questions that are either easy to answer (use Grunt or Gulp for build automation; use Mocha, Karma and Istanbul for unit testing and code coverage) or that I don't have anything valuable to say about (which CSS pre-processor should we use?).

Here are the 6 questions:

  1. How do we keep our code modular?
  2. How do we optimize the payload delivered to browsers?
  3. Should we focus on unit testing or end-to-end testing?
  4. How do we make asynchronous code easy to maintain?
  5. Which MVC framework should we use?
  6. How do we deploy upgrades?

Question 1: How do we keep our code modular?

Modular code gives us a number of benefits: we can easily encapsulate private implementation details inside a module, and modules make dependencies explicit and, together with a module repository, provide versioning at an adequate level of granularity. The JavaScript world has popularized two different ways to write modular code: AMD and CommonJS.

Traditionally AMD has been used for the browser and CommonJS has been used for Node. The most popular AMD implementation is probably RequireJS.

However, people who use Node on the back-end prefer to use the same style for the front-end, and it is possible to use CommonJS for the browser, by using a tool like Browserify.

And we can even avoid picking one over the other by using a modern module loader like webpack, which understands both formats.

To handle dependencies and versioning of third-party modules, and maybe also our own modules, we need a module repository. The most popular module repository for the browser is Bower.

Originally npm was the module repository for Node, but is now also used to host many front-end modules, again mostly for the benefit of people already using Node on the back-end.

If using npm, be careful to use peerDependencies to ensure a flat dependency tree, something that Bower provides by default. This is important to avoid duplicate subdependencies, which might be okay for Node but is typically something you want to avoid in the browser.
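For example, a front-end module published to npm might declare jQuery as a peer dependency (the package name and version range below are illustrative), so the consuming application installs a single shared copy rather than each module pulling in its own:

```json
{
  "name": "my-widget",
  "version": "1.0.0",
  "peerDependencies": {
    "jquery": ">=1.9"
  }
}
```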

On the horizon, ES6 modules are currently being standardized, but they are not yet supported by current browsers, although some transpiler projects exist.

I have pretty good experience using RequireJS and Bower, and I am looking forward to learning more about the benefits that webpack provides.

Question 2: How do we optimize the payload delivered to browsers?

Using a modular coding style, we'll end up with a lot of small individual source files. Loading all those source files into the browser one by one is very inefficient, especially over mobile networks. The upcoming HTTP/2 protocol, based on SPDY, might solve some of these issues, but until then it is important to bundle source files into fewer, compressed payloads to be loaded by the browser. This applies not only to JavaScript files, but also to CSS files, HTML templates and images.

We will have to think about how coarse-grained we want those payloads to be. Bundling everything into a single payload leads to large up-front load times, especially on slow devices, and not all resources are going to be used by the application immediately (think: pages later in the user's workflow) or at all (think: A/B testing).

Another optimization to think about is generating the fully rendered HTML page server-side and only using client-side code to make updates to the page. To avoid duplicating the HTML rendering code, frameworks like Rendr allow us to run the same rendering code both server-side and client-side, an approach sometimes called isomorphic JavaScript.
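The core idea can be sketched in a few lines: one rendering function shared between server and browser (the function name and markup below are hypothetical, not Rendr's API):

```javascript
// One template function, usable on both sides of the wire.
function renderGreeting(user) {
  return '<p>Hello, ' + user.name + '</p>';
}

// Server-side (Node): send the fully rendered page to the browser, e.g.
//   res.send(renderGreeting(user));
// Client-side: re-render only when the data changes, e.g.
//   element.innerHTML = renderGreeting(updatedUser);
console.log(renderGreeting({ name: 'Ada' })); // prints <p>Hello, Ada</p>
```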

Tools like RequireJS, Browserify and webpack, mentioned above, all provide facilities for bundling resources into fewer, compressed payloads, and controlling the granularity.

I have pretty good experience using RequireJS for bundling, and I would like to try isomorphic rendering with a tool like Rendr.

Question 3: Should we focus on unit testing or end-to-end testing?

Today we have great tools for both unit testing and end-to-end testing of front-end JavaScript code. I recommend Mocha as the test framework: it has a rich, thriving ecosystem of useful plugins and is getting quite popular. Use Mocha together with Chai as the assertion library. Run the tests in PhantomJS or a real browser with Karma, which also produces great code coverage reports using Istanbul.

For end-to-end tests I recommend Nightwatch, which lets you write test scripts in JavaScript and uses Selenium under the hood to drive the browser.

However, testing a user interface has always been difficult, and testing browser-based UI is no different. In my experience, end-to-end testing will never be as productive as unit testing. End-to-end testing is easier for most development teams to get started with, though, and many of the projects I have worked on ended up spending a lot of time, and enduring a lot of frustration, that could have been avoided had they focused more on unit testing.

I agree with Martin Fowler and Mike Cohn and recommend keeping a few end-to-end tests and testing the vast majority of the requirements with unit tests.

Some elements of a front-end project are easy to unit test, especially the MVC code: models, views and controllers. Some things are harder to test, and we have to figure out how we can write unit tests for our particular environment. Here are some examples.

Real browsers differ from each other in subtle ways and we want to test that our code works correctly across the targets we want to support. Karma allows us to run our unit tests in real browsers, and by using a browser service, like BrowserStack, we can even run those unit tests from our Continuous Integration server.

Our model layer usually talks to one or more back-end services, and it is important that the expectations we have of those services' APIs match the actual APIs. One way to write unit tests for this is to have the back-end build process produce mock service responses as text files that can be consumed by the front-end model unit tests.

Not all of our front-end logic is necessarily written in JavaScript. Requirements such as responsive design are usually implemented with media queries in CSS. Luckily, we can also write unit tests for CSS.

Question 4: How do we make asynchronous code easy to maintain?

JavaScript is single-threaded, which is a good thing, because multi-threaded code is extremely difficult to get right for most development teams. (Note that Web Workers are more akin to separate processes than to threads, because they do not share any state and only communicate with each other through message passing). To avoid blocking this single thread, JavaScript relies on asynchronous code to wait for long-running operations, like HTTP requests or CSS transition events.

Traditionally we would write asynchronous code using callbacks, like this:

$.ajax({
   // ... request settings ...
   success: function (response) {
      // handle response
   },
   error: function (err) {
      // handle error
   }
});

This works well for simple scenarios. But sometimes we need to wait for more complex conditions, like waiting for a number of asynchronous operations to either succeed or fail. A popular abstraction for handling such scenarios is called promises (or futures, or then-ables); promises are now part of the upcoming ES6 standard and are supported by most modern browsers. The example before becomes:

$.ajax({...})
.then(function (response) {
   // handle response
}, function (err) {
   // handle error
});

(Note that jQuery's own promise implementation predates the standard: before jQuery 3, the jqXHR object has no .catch, so the error handler is passed as the second argument to .then.)

Several libraries exist to polyfill the Promise specification for older browsers.
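As an example of the "wait for several operations" scenario mentioned above, standard promises can be combined with Promise.all; the two fetch functions here are hypothetical stand-ins for real AJAX calls:

```javascript
// Hypothetical asynchronous operations; in practice these would be
// $.ajax(...) calls or similar, each returning a promise.
function fetchUser() { return Promise.resolve({ name: 'Ada' }); }
function fetchOrders() { return Promise.resolve([1, 2, 3]); }

// Promise.all resolves when every operation succeeds,
// and rejects as soon as any one of them fails.
var ready = Promise.all([fetchUser(), fetchOrders()])
  .then(function (results) {
    var user = results[0];
    var orders = results[1];
    return user.name + ' has ' + orders.length + ' orders';
  });

ready.then(function (message) {
  console.log(message); // prints Ada has 3 orders
});
```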

Popular libraries that extend the standard Promise API with additional handy features include Bluebird and Q.

I have pretty good experience with both Bluebird and Q.

Question 5: Which MVC framework should we use?

As mentioned in the introduction, there are a ton of MVC frameworks out there. The MVC design pattern (with variations) has more or less become the de facto design for user interface applications.

MVC frameworks differ from each other in a number of ways. Some are large frameworks (like Angular) that provide a blueprint for our entire application, while others are small libraries (like Backbone and CanJS) that mostly provide helper classes and mix and match freely with other tools. Some frameworks rely on separate template engines (like Mustache) for rendering the HTML (like CanJS), while other frameworks have innovated HTML rendering more drastically (Angular relies on custom ng-attributes in the DOM; React renders into a virtual DOM).

A good way to compare different MVC frameworks is to look at the code behind the different showcase implementations of a ToDo-application provided by TodoMVC. Compare the terminology used by each framework and the structure and testability of the code.

Web components, an upcoming set of W3C standards, might impact how MVC frameworks evolve in the future. Some projects, like Polymer, provide tools and polyfills for exploring web components today.

I have pretty good experience with both CanJS and Backbone. I am not too happy with large frameworks, because of the potential cost of switching to another framework, should I wish to do so one day. However, I am intrigued by the idea of a virtual DOM as provided by frameworks like React.

Question 6: How do we deploy upgrades?

One of the most important means of improving front-end performance is to leverage caching of payloads. The browser can cache payloads it has loaded previously to avoid downloading them again, and CDNs can cache payloads on servers in low-latency proximity to the browser to speed up the download.

However, with caching we need a strategy for cache invalidation. When we want to deploy a new version of our application, we need a way to tell CDNs and browsers that they should refresh the payloads. If we want zero-downtime upgrades, we also need to figure out a way to do atomic upgrades, to prevent a client from downloading some payloads from the previous version and other payloads from the next version. One idea for solving this problem is to include version information, like a content hash, in the names of resources.

Wrapping up

These are the questions that I currently find important to think about. Of course there are other questions that might be important too, such as data binding and client-side error reporting.

One interesting question that I didn't include is how to make an application work in real time: ensuring that updates from one user are immediately visible to other users who happen to be looking at the same piece of data. This is highly relevant for computer games, but also for collaborative applications like planning tools. One framework that provides a take on this question is Meteor.

Real-time is an area that I would love to get a chance to investigate further.

All these questions are quite interesting but when starting a new project we should always strive to Keep It Simple. We only need to solve the problems that are important to us. We might not need zero-downtime upgrades or CDNs. We might not expect to have complex asynchronous code. We might only target a single browser. Our team might already have good experience with a particular unit testing tool or MVC framework. Let's always leverage such simplifying circumstances to our benefit so we have more time to spend on solving the most important problems.
