Dockerfile Best Practices
Sources for Important Information:
- Google Cloud Best practices for building containers
- The importance of PID 1 in containers (see the signal-handling sketch below)
- Google Cloud Run Container Runtime Contract
- Docker Multi-Stage Builds
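On the PID 1 point above: whatever the exec-form ENTRYPOINT/CMD starts runs as PID 1, so it has to reap children and handle SIGTERM itself, otherwise the container only stops after Docker's timeout. A minimal sketch of the two usual fixes, assuming Debian's tini package (not tied to any of the examples below):
# Option 1: let Docker inject an init process at run time:
#   docker run --init myimage
# Option 2: bake a tiny init (tini) into the image so it reaps zombies
# and forwards signals to the real process
FROM python:3.7-slim
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
COPY app.py .
# exec form (no shell wrapper): tini is PID 1, python is its child
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["python", "app.py"]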
Dockerfile Examples
Python
# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.7"
ARG DISTRO_VERSION="buster"
ARG PORT=8000
FROM python:${RUNTIME_VERSION}-slim-${DISTRO_VERSION}
# Include global args in this stage of the build
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# re-declare PORT so it is visible in this stage (ARGs before FROM are not)
ARG PORT
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# copy requirements.txt
COPY requirements.txt ./
# Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install --no-cache-dir -r requirements.txt
# RUN apt-get update && \
#     apt-get -y --no-install-recommends install \
#     gcc build-essential && \
#     python${RUNTIME_VERSION} -m pip install --no-cache-dir -r requirements.txt && \
#     apt-get purge -qy gcc build-essential && \
#     apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# add the function code
COPY ./src/ ${FUNCTION_DIR}
# change python path if a lib needed to be added
# ENV PYTHONPATH=${FUNCTION_DIR}lib:$PYTHONPATH
ENTRYPOINT [ "python", "-m", "streamlit", "run" ]
CMD [ "serverless-perf-model/app.py" ]
ENV PORT $PORT
EXPOSE $PORT
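Because all four ARGs are declared above the FROM line, they can be overridden at build time without editing the file; a build/run pair might look like the following (the tag perf-model is a placeholder):
docker build --build-arg RUNTIME_VERSION=3.7 --build-arg PORT=8000 -t perf-model .
# publish whatever port the app actually listens on (Streamlit defaults to 8501)
docker run --rm -p 8501:8501 perf-model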
Multi-stage build with a smaller Dockerfile, but with possible issues with CLIs (source):
FROM python:3.7-slim AS compile-image
RUN apt-get update
RUN apt-get install -y --no-install-recommends build-essential gcc
COPY requirements.txt .
RUN pip install --user -r requirements.txt
COPY setup.py .
COPY myapp/ .
RUN pip install --user .
FROM python:3.7-slim AS build-image
COPY --from=compile-image /root/.local /root/.local
# Make sure scripts in .local are usable:
ENV PATH=/root/.local/bin:$PATH
CMD ["myapp"]
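A quick way to check the "possible issues with CLIs" caveat above is to confirm that console scripts copied from /root/.local actually resolve on PATH inside the final image (the tag and entry-point name myapp are hypothetical here):
docker build -t myapp .
docker run --rm myapp sh -c 'command -v myapp'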
Another multi-stage build (source):
FROM python:3.7-alpine as base
FROM base as builder
RUN mkdir /install
WORKDIR /install
COPY requirements.txt /requirements.txt
# (--install-option has been removed from newer pip; --prefix is the direct equivalent)
RUN pip install --prefix=/install -r /requirements.txt
FROM base
COPY --from=builder /install /usr/local
COPY src /app
WORKDIR /app
CMD ["gunicorn", "-w 4", "main:app"]
or using --target (source):
FROM python:3.8.9-alpine3.13 as pythonBuilder
WORKDIR /home/root/server
# any dependencies in python which requires a compiled c/c++ code (if any)
RUN apk update && apk add --update gcc libc-dev linux-headers libusb-dev
COPY ./local-project-folder .
RUN pip3 install --target=/home/root/server/dependencies -r requirements.txt
FROM python:3.8.9-alpine3.13
WORKDIR /home/root/server
# include runtime libraries (if any)
RUN apk update && apk add libusb-dev
COPY --from=pythonBuilder /home/root/server .
ENV PYTHONPATH="${PYTHONPATH}:/home/root/server/dependencies"
CMD "./server.py"
NodeJS
Sources:
# if you're doing anything beyond your local machine, please pin this to a specific version at https://hub.docker.com/_/node/
# FROM node:12-alpine also works here for a smaller image
FROM node:12-slim
# set our node environment, either development or production
# defaults to production, compose overrides this to development on build and run
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 3000 for node, and 9229 and 9230 (tests) for debug
ARG PORT=3000
ENV PORT $PORT
EXPOSE $PORT 9229 9230
# you'll likely want the latest npm, regardless of node version, for speed and fixes
# but pin this version for the best stability
RUN npm i npm@latest -g
# install dependencies first, in a different location for easier app bind mounting for local development
# due to default /opt permissions we have to create the dir with root and change perms
RUN mkdir /opt/node_app && chown node:node /opt/node_app
WORKDIR /opt/node_app
# the official node image provides an unprivileged user as a security best practice
# but we have to manually enable it. We put it here so npm installs dependencies as the same
# user who runs the app.
# https://github.com/nodejs/docker-node/blob/master/docs/BestPractices.md#non-root-user
USER node
COPY --chown=node:node package.json package-lock.json* ./
RUN npm install --no-optional && npm cache clean --force
ENV PATH /opt/node_app/node_modules/.bin:$PATH
# check every 30s to ensure this service returns HTTP 200
HEALTHCHECK --interval=30s CMD node healthcheck.js
# copy in our source code last, as it changes the most
# copy in as node user, so permissions match what we need
WORKDIR /opt/node_app/app
COPY --chown=node:node . .
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
# if you want to use npm start instead, then use `docker run --init` in production
# so that signals are passed properly. Note the code in index.js is needed to catch Docker signals
# using node here still gives more graceful stopping than npm with --init afaik
# I still can't come up with a good production way to run with npm and graceful shutdown
CMD [ "node", "./bin/www" ]
And the entrypoint can be something like the following:
#!/bin/bash
set -euo pipefail
# usage: file_env VAR [DEFAULT]
# ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
# "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
    local var="$1"
    local fileVar="${var}_FILE"
    local def="${2:-}"
    if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
        echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
        exit 1
    fi
    local val="$def"
    if [ "${!var:-}" ]; then
        val="${!var}"
    elif [ "${!fileVar:-}" ]; then
        val="$(< "${!fileVar}")"
    fi
    export "$var"="$val"
    unset "$fileVar"
}
file_env 'MONGO_USERNAME'
file_env 'MONGO_PASSWORD'
exec "$@"
Golang
FROM golang:1.14-alpine as build
ENV CGO_ENABLED=0
WORKDIR /go/src/app
COPY src/. .
# for https certificates
RUN apk --no-cache add ca-certificates
RUN go get -d -v ./...
# RUN go install -v ./...
RUN go build -o app .
FROM scratch
WORKDIR /usr/bin/app
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /go/src/app/app ./app
COPY --from=build /go/src/app/public ./public
# later, create user app and run application as this user
# USER app
EXPOSE 80
VOLUME /tmp/
CMD ["/usr/bin/app/app"]
Misc
Conditional Multi-Stage Docker
Sources: