The Universal Docker Approach: Local Live Reload to Production in One Dockerfile

If you use Docker not only in production but also for local development of your Go projects (the technology stack could really be anything; I'm just using my Go project as an example), you don't necessarily need to maintain multiple Dockerfiles, one for each environment. With a multi-stage build, a single Dockerfile can work everywhere, producing a small image for production and live reloading for local development. In this article, we'll walk through such a Dockerfile, the one I use for one of my projects.

Let's start with why we would run an application locally in a Docker container at all, when go run main.go does its job perfectly well. There can be many reasons for this; I'll describe mine. First, I needed a persistent database layer. These days, it's not necessary to deploy a database locally; you can spin one up in the cloud with just a couple of clicks using, for example, Supabase. However, such solutions often put the database to sleep if they don't detect any activity for a certain period. During active product development this isn't a problem, but imagine returning from vacation: you sit down to work on your project, start your local server, and see a database connection error. You then have to remember that the cloud provider put the database to sleep, and go wake it up. That's extra wasted time I try to avoid. My solution is to add a few lines to the docker-compose.yml file and launch a database locally in seconds. Locally, there's no need to worry about scaling or backups, so this solution is ideal.

The second reason you might want to run an application specifically in a Docker container is its dependency on external libraries. My project, in particular, is an image converter, resizer, and optimizer that supports several popular formats. Naturally, it depends on many libraries that do the actual conversion from one format to another. I don't want to install all of that on my machine, so it's better to use Docker for isolation.

1. docker-compose.yml#

You've probably noticed that I mentioned Docker Compose. Yes, indeed, we'll be using a combination of docker-compose.yml and a Dockerfile, and this combination is, as they say, a match made in heaven. Let's write these files step by step, beginning with docker-compose.yml. Our setup will consist of two services: app and db. Let's write it down like this:

# docker-compose.yml
services:
  app:
  db:

Let's first set up the database layer, and then we'll deal with the application.

# docker-compose.yml
services:
  db:
    image: postgres
    container_name: cnvrt-db
    environment:
      POSTGRES_USER: cnvrt
      POSTGRES_PASSWORD: cnvrt
      POSTGRES_DB: cnvrt
    ports:
      - ${DB_PORT}:${DB_PORT}

The settings' names speak for themselves: we set the image name and the container name, add a few environment variables for database access, and map the external port to the internal one. Did you notice that the ports are taken from the DB_PORT environment variable? This is because the same value is used in both the application service and the database service, and to avoid repetition, it's easier to refer to a single environment variable. Of course, don't forget to add these variables to the environment before running Docker Compose:

DB_PORT=5432 docker-compose up

Or even better, to avoid doing this every time, I added the following block at the top of my Makefile:

# Makefile
exist := $(wildcard .envrc)
ifneq ($(strip $(exist)),)
  include .envrc
endif

This checks whether an .envrc file exists at the specified path and, if it does, includes its contents in the Makefile. Inside .envrc, I keep all the local environment variables used in the project:

# .envrc
...
export DB_PORT=5432
export APP_PORT=3000
export DB_DSN=postgres://cnvrt:cnvrt@db:5432/cnvrt?sslmode=disable
...

And there's also a command in the Makefile that runs Docker Compose:

# Makefile
docker/run:
	docker-compose up --remove-orphans

Since every variable in .envrc is declared with export, GNU Make passes it on to the environment of the commands it runs. Thus, Docker Compose always has access to all the necessary environment variables.
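
To double-check that the substitution works, you can ask Docker Compose to print the fully rendered file with all variables expanded:

docker-compose config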

Don't forget to add .envrc to .gitignore, just like any .env file, to avoid accidentally committing your private data to a public repository.
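
A couple of entries like these are enough:

# .gitignore
.envrc
.env*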

Next, let's describe the application service:

# docker-compose.yml
services:
  app:
    container_name: cnvrt-app
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    ports:
      - ${APP_PORT}:${APP_PORT}
    depends_on:
      - db
    environment:
      - DB_DSN=${DB_DSN}
    volumes:
      - ./:/app/

What is important to note here:

  1. We specify the Dockerfile that Docker Compose should use to build the container.
  2. Next, we set the target. This is a key point: since the Dockerfile, which we'll look at later, uses a multi-stage build, it's crucial to specify which stage to use for local development. In my case, it's the development stage (see the sketch after this list for what this corresponds to under the hood).
  3. We indicate that this service depends on the db service.
  4. We configure a volume that Air will use to rebuild the application locally whenever files are modified.
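
For reference, setting target here is roughly equivalent to picking the stage yourself with docker build. A sketch for illustration (the image tag is made up):

DOCKER_BUILDKIT=1 docker build --target development -t cnvrt-app .

Docker Compose does this for you on docker-compose up, so you never need to run it by hand.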

2. Dockerfile. Stage builder.#

Let's move on to the Dockerfile. It's quite lengthy, so I'll skip the parts that aren't essential for the narrative, but you can always visit the project's repository to see what I've left out.

Everything starts with the builder stage, which is based on a Debian image, where I install all the necessary dependencies for both the build and runtime.

# Dockerfile
ARG GOLANG_VERSION=1.22.5
FROM golang:${GOLANG_VERSION}-bookworm AS builder
ARG TARGETARCH=arm64

SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN DEBIAN_FRONTEND=noninteractive \
  apt-get update && \
  apt-get install --no-install-recommends -y \
  ca-certificates \
  automake build-essential curl && \
  curl -L https://github.com/golang-migrate/migrate/releases/download/v4.17.0/migrate.linux-${TARGETARCH}.tar.gz | tar xvz && \
    mv migrate /usr/local/bin/migrate && \
  ...the rest of dependencies are installed and built here
  
WORKDIR /app 
COPY . .
RUN go mod download && go build -o "${GOPATH}"/bin/cnvrt ./cmd/api/main.go

After installing all the dependencies, we build the application into an executable file, which we will then simply copy over to the production stage.
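
By the way, it can be handy to sanity-check this stage in isolation. A quick sketch, assuming the defaults of the golang base image (GOPATH is /go, so the binary lands at /go/bin/cnvrt; the tag is arbitrary):

DOCKER_BUILDKIT=1 docker build --target builder -t cnvrt-builder .
docker run --rm cnvrt-builder ls -lh /go/bin/cnvrt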

3. Dockerfile. Stage production.#

The production stage looks as follows:

# Dockerfile

# Use a lighter Debian image
FROM debian:bookworm-slim AS production

# Copy everything that can be copied from the builder stage
COPY --from=builder /go/bin/cnvrt /usr/local/bin/cnvrt

# Install runtime dependencies
RUN DEBIAN_FRONTEND=noninteractive \
  apt-get update && \
  apt-get install --no-install-recommends -y \
  procps libglib2.0-0 libjpeg62-turbo && \
  ...the rest of dependencies are installed and built here
  ln -s /usr/lib/"$(uname -m)"-linux-gnu/libjemalloc.so.2 /usr/local/lib/libjemalloc.so && \
  apt-get autoremove -y && \
  apt-get autoclean && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
  chown -R nobody:nogroup /app && chmod 755 /app

# Use the restricted nobody user account
USER nobody

EXPOSE ${PORT}

# Run the executable with all the necessary flags
CMD ["/bin/sh", "-c", "./usr/local/bin/cnvrt \
  -env=\"${ENV}\" \
  -port=\"${PORT}\" \
  -db-dsn=\"${DB_DSN}\""]

For the production image, we can use the lightweight debian-slim image, stripped of most documentation and language support, which we definitely won't need. We copy the compiled binary into it, and it runs without issues because both stages are based on the same Debian release and the same architecture.

Next, we install production-ready dependency packages for image conversion. We set the appropriate permissions for the application directory and choose to run the application as the nobody user with minimal privileges.

Then, we expose the application port and run the executable, passing in all the necessary flags. Their values are read from environment variables when the container starts: locally, the Makefile has already exported them for us; in production, you pass them to the container yourself.

At this point, the production stage is complete, and we move on to the final stage, which will be used for local development.

4. Dockerfile. Stage development.#

# Dockerfile
FROM builder AS development

RUN go install github.com/air-verse/air@latest

EXPOSE ${PORT}

CMD ["/bin/sh", "-c", "air \
  --build.cmd 'make build' \
  --build.bin 'make bin' \
  --build.delay '100' \
  --build.exclude_dir 'uploads, tmp' \
  --build.include_ext 'go, tpl, tmpl, html, yml, yaml' \
  --misc.clean_on_exit 'true'"]

In local development, image size isn't as critical, so to save time, we utilize the resources that were already installed and built during the builder stage. Additionally, we install the Go package air to enable live reload when files are changed, which is very convenient during local development. We also expose the port and run air, passing the configuration parameters directly. To make the startup command cleaner, you could use a .air.toml configuration file, which air will automatically read. However, I’m not a fan of having too many configuration files in the repository, so when possible, I prefer running the application by passing the parameters directly. You can read more about configuration parameters in the air project repository.
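
The build.cmd and build.bin values above refer to Makefile targets that I've omitted here. Hypothetically, they might look something like this (a sketch under my own assumptions, not the project's actual targets; ENV here is taken from the elided part of .envrc):

# Makefile
build:
	go build -o ./tmp/api ./cmd/api/main.go

bin:
	./tmp/api -env=${ENV} -port=${APP_PORT} -db-dsn=${DB_DSN}

And for completeness, here is what the same configuration would look like as an .air.toml file, equivalent to the CLI flags above:

# .air.toml
[build]
cmd = "make build"
bin = "make bin"
delay = 100
exclude_dir = ["uploads", "tmp"]
include_ext = ["go", "tpl", "tmpl", "html", "yml", "yaml"]

[misc]
clean_on_exit = true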

Now that we've covered docker-compose.yml and the Dockerfile, the last step is understanding how to build the respective stages locally and in production. In my case, I use make locally, so it's as simple as running make docker/run. In production, we also need to specify the target stage during the build:

DOCKER_BUILDKIT=1 docker build -t cnvrt --target production .
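
Once the image is built, you run it by supplying the same settings as environment variables. A minimal sketch (the values here are placeholders, not real production settings):

docker run --rm \
  -e ENV=production \
  -e PORT=8080 \
  -e DB_DSN='postgres://user:password@host:5432/cnvrt?sslmode=disable' \
  -p 8080:8080 \
  cnvrt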

5. In conclusion.#

Today, you learned how to leverage Docker and Docker Compose to streamline working with a project in both local and production environments, all while using a single Dockerfile. The configurations presented here are far from perfect and offer plenty of room for improvement, but they can serve as a starting point if you have similar requirements for your application.

As a reminder, you can find the source code and all the files we discussed today in the project repository at https://github.com/prplx/cnvrt-pics-server. You can also use the application itself for convenient image conversion, resizing, and optimization at https://cnvrt.pics/.

Until next time!