Using Docker for local TypeScript development

By Matt Chapman

Tools like Node Version Manager (NVM) have made it far easier to develop Node applications across differing Node versions. But things can get a little more complicated to manage as soon as you start to need extra services such as databases, caches, or message brokers.

Your system needs these services to work, but it’s not ideal for engineers to have to manage various versions of other software on their own development machines.

Luckily, Docker can help alleviate these issues, but it’s not always clear how best to use it with TypeScript-based applications.

In this article, we’re going to set up a Docker-based development environment that can be distributed with your project to ensure every engineer has the right tools available for the job, with the least amount of effort. 

We'll be looking at how to tackle a common problem when using TypeScript: the challenge of watching for changes and recompiling on the fly whilst using Docker for local development.

You can download Docker for your chosen operating system here.


The main thing you’ll need for this tutorial is Docker. Although this article was written with macOS in mind, there’s no reason it shouldn’t work on Linux- and Windows-based installations as well.

You’ll also need basic knowledge of Node.js and NPM, and a local installation of Node.js to get things started.

Project set-up

Application dependencies

To start, we’re going to quickly set up a Koa application with TypeScript. We’re only going to skim over this part, but if you want to learn more, check out this article on how to build a basic API with TypeScript, Koa, and TypeORM.

First, create a directory for your project and initialise a Node project inside it:

npm init -y

Next, we’ll install our base dependencies (we’re making a simple demo app for now, so we don’t need a router or anything like that):

npm install koa

Next, we’ll install our basic development dependencies:

npm install -D typescript @types/koa

Finally, we need to create our basic TypeScript configuration. Create a tsconfig.json file in the root directory and paste the following:

{
  "compilerOptions": {
    "target": "ES2017",
    "module": "commonjs",
    "lib": ["es2017"],
    "outDir": "dist",
    "rootDir": "src",
    "noImplicitAny": true,
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}

This is enough to get our application working. You may have noticed that we haven’t installed the TypeScript definitions for Node. This is by design since we’re going to do it later on.

Get our basic app working

First, we’ll add a couple of scripts to our package.json to make things a little simpler. Add the following to the scripts key in your package.json:

"scripts": {
  "build": "tsc",
  "start": "tsc -w --preserveWatchOutput"
},

These scripts will either build the application once, or build the application and watch for changes, depending on which one you run.

Create a directory named src, and create an app.ts file. Inside that file, paste the following code:

import * as Koa from 'koa';

const app = new Koa();

app.use(async ctx => {
  ctx.body = 'Hello World';
});

app.listen(process.env.PORT || 3000);

You can test this by running npm run build in the root directory, which will transpile the TypeScript file down to JavaScript. Then run node dist/app.js to spin up the server. Visiting http://localhost:3000 in your browser should now show ‘Hello World’.

Dockerize the environment


Before we start with Docker, we need to set up PM2. PM2 is a process manager for Node applications. It’s a super powerful tool that not only keeps your applications alive, but also helps with clustering, memory management, and – in our case – development.
PM2 is configured with ecosystem files. Create a file named ecosystem.config.js in the project root and paste the following:

module.exports = {
  apps: [{
    name: 'app',
    script: 'dist/app.js',
    instances: 'max',
    autorestart: true,
    watch: 'dist/**/*.js',
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production'
    }
  }, {
    name: 'app-watcher',
    script: 'npm start',
    instances: 1,
    autorestart: true,
    watch: 'tsconfig.json',
    env: {
      NODE_ENV: 'development'
    },
    env_production: {
      NODE_ENV: 'production'
    }
  }]
};

This creates two process definitions. The first is the Koa application itself, and the second is the watch task. 


To get the initial Docker set-up working, we need a couple of files. First we’ll put together a .dockerignore file. The syntax is similar to a .gitignore file, and at a high level it prevents files from being copied into your Docker container. 

We won’t go into it any more, but if you want, you can read more about it in the official documentation.

In the following snippet, we’re preventing any Git metadata, installed Node modules, or the compiled application from being copied into the container if they exist on the host:

.git
node_modules
dist
Next we need a Dockerfile to hold some global dependencies and compile the app for the first time. In the root of the application, create a file named Dockerfile and paste the following code:

FROM node:lts-alpine

WORKDIR /app

RUN apk add --no-cache bash

RUN npm install -g pm2

COPY package.json package-lock.json ./

RUN npm install

COPY . ./

RUN npm run build

CMD [ "pm2-runtime", "start", "ecosystem.config.js", "--only=app" ]

Let’s step through this:

FROM node:lts-alpine

This will use the Alpine image for the latest long-term support version of Node. Alpine images are significantly smaller than normal images as they don’t contain any of the extra tools you may normally expect. For some languages this can be problematic, but for Node it’s fine and means that we only need to pull down a ~24MB base image instead of ~352MB for the full Node image.


WORKDIR /app

This sets the current working directory to /app, so all subsequent instructions run relative to it.

RUN apk add --no-cache bash

This will install Bash. It’s not a strict requirement, but it makes things a bit more familiar when we connect to the container to run our NPM commands later.

RUN npm install -g pm2

This line installs the PM2 process manager to ensure that our containers don’t fall over during development. If you’re using PM2 as a project dependency, you can remove this line and use a more standard NPM script instead if you’d like.

COPY package.json package-lock.json ./

Here we copy the package.json and package-lock.json files into the container. We copy these files, and only these files, so we can make use of Docker’s built-in layer caching. If we have to rebuild the image and the dependencies haven’t changed, then the next step won’t need to be run, which speeds up the build significantly.

RUN npm install

Install our application dependencies.

COPY . ./

This copies the remaining files from your application into your container.

RUN npm run build

This runs the build script to ensure our container always starts with a copy of the transpiled application.

CMD [ "pm2-runtime", "start", "ecosystem.config.js", "--only=app" ]

This starts the runtime version of PM2 using our ecosystem file. You’ll notice the --only=app at the end. This means that although we’re specifying multiple applications, only the main Koa app will be started by default. If you’re using the same container for production, then you’re likely to want a second ecosystem file instead of using --only.
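As an illustrative sketch (the filename and contents here are ours, not part of the project above), such a production ecosystem file might define only the app process, with no watcher:

```javascript
// ecosystem.production.config.js — hypothetical production-only variant.
// There is no app-watcher here, and no file watching on the app itself.
module.exports = {
  apps: [{
    name: 'app',
    script: 'dist/app.js',
    instances: 'max',
    autorestart: true,
    max_memory_restart: '1G',
    env: {
      NODE_ENV: 'production'
    }
  }]
};
```

You would then point the production image’s CMD at this file instead of passing --only.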


Lastly, we need a docker-compose.yml file to make it easier to control our required services. We’re going to create two services, one for each application that we’ve defined in our PM2 ecosystem file. PM2 will happily run both applications at once, but as we’re using Docker it makes sense to follow the Docker philosophy and have each container running one application.

Create a docker-compose.yml and paste the following:

version: '3'

services:
  app:
    build: ./
    image: typescript-docker-koa
    restart: always
    volumes: &appvolumes
      - ./:/app:delegated
    ports:
      - '3000:3000'
    environment: &appenv
      PORT: 3000

  app-watcher:
    build: ./
    image: typescript-docker-koa
    depends_on:
      - app
    restart: always
    volumes: *appvolumes
    environment: *appenv
    command: ["pm2-runtime", "start", "ecosystem.config.js", "--only=app-watcher"]

Running the application

This is everything we need to get our development environment up and running. So now we just need to build it and run it. 

To do this, run:

docker-compose up --build

This will build our images and start everything running. As we’re using the runtime version of PM2, you should see output from each process in the stdout of the application containers.

To check if everything is working as expected, visit http://localhost:3000 and you should see our ‘Hello World’ message. Now change it to something else and save. The TypeScript watcher container will recompile the application, and the application container will pick up the change and restart the Koa app.

Running NPM in the container

As I mentioned earlier, we still need to install our Node TypeScript definitions. To do this we’re going to connect to Bash in the application container and install them there, to ensure that we’re using the definitions matching the container’s Node version, and not the host’s.

This can be achieved with the following:

docker-compose exec app bash

Then, once you’re inside the container, run the following:

npm install -D @types/node

From now on you’re going to want to install your dependencies from within the container. This ensures that any binaries will be compiled for the container environment, and not the host environment. 

However, as your node_modules folder is configured to sync with the host, your local environment will still benefit from the code completion etc. provided by the TypeScript definitions.
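As a small sketch of where this pays off (the helper below is ours, not part of the app): once @types/node is installed, process.env is fully typed, so a function like this type-checks and your editor can autocomplete a call such as getPort(process.env) inside src/app.ts.

```typescript
// Hypothetical helper: resolves the listen port the same way app.ts does.
// `process.env` satisfies this parameter type once @types/node is installed.
const getPort = (env: Record<string, string | undefined>): number => {
  const parsed = Number(env.PORT);
  // Fall back to 3000 when PORT is unset or not a positive number.
  return Number.isFinite(parsed) && parsed > 0 ? parsed : 3000;
};

console.log(getPort({ PORT: '8080' })); // 8080
console.log(getPort({}));               // 3000
```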


Conclusion

Although we didn’t utilise Docker for a database or in-memory cache in this tutorial, we explored how to tackle a common problem when using TypeScript: watching for changes and recompiling on the fly whilst using Docker for local development.

When using the above setup we can distribute a consistent development environment to every engineer. 

We can also easily add services as our application requires them, such as persistence with PostgreSQL, caching with Redis, or a message broker such as RabbitMQ, whilst ensuring that every engineer is working in the same environment.
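For example, a Redis cache could be added as one more entry in docker-compose.yml (the service name, image tag, and port below are illustrative):

```yaml
  redis:
    image: redis:alpine
    restart: always
    ports:
      - '6379:6379'
```

Each additional service sits alongside the application containers under the services key, and they can reach it by its service name (here, redis) on the Compose network.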

You can find the code from this article here.