Dockerfile example
This example shows how to build a container image from a Dockerfile and publish it on Harbor.
Custom Jupyter Notebook image
In this example, we will build a custom Jupyter Notebook image with specific (outdated) versions of Python, TensorFlow, PyTorch, and Nvidia CUDA.
# Base image with CUDA and Ubuntu
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
# Prevent interactive prompts from apt-get
ARG DEBIAN_FRONTEND=noninteractive
# Install required public keys
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys A4B469963BF863CC
# Update and install necessary packages
RUN apt-get update && apt-get install -y --no-install-recommends \
    software-properties-common \
    build-essential \
    libssl-dev \
    libffi-dev \
    curl \
    && add-apt-repository ppa:deadsnakes/ppa -y \
    && apt-get update \
    && apt-get install -y --no-install-recommends \
    python3.7 \
    python3.7-dev \
    python3.7-distutils \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# Set Python 3.7 as the default version, then install pip and update setuptools
# (Python 3.7 is past end-of-life, so the version-pinned get-pip.py script is needed)
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1 \
    && curl https://bootstrap.pypa.io/pip/3.7/get-pip.py -o get-pip.py \
    && python3.7 get-pip.py \
    && rm get-pip.py \
    && python3.7 -m pip install --upgrade pip setuptools
# Install TensorFlow 1.15.0, its dependencies and Jupyter
RUN pip install numpy==1.16.4 protobuf==3.20.1 tensorflow-gpu==1.15.0 jupyter
# Install PyTorch 1.9.1 with GPU support
RUN pip install torch==1.9.1+cu102 -f https://download.pytorch.org/whl/cu102/torch_stable.html
# Expose port 8888 for Jupyter notebook
EXPOSE 8888
# Set working directory
WORKDIR /tf
# Set default command to start Jupyter notebook
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
Build the image from the Dockerfile in the same directory, specifying the image name and version:
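For example (the name and tag here match the Makefile defaults shown later in this section):
docker build -t custom-jupyter:0.0.1 .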
Try running the image locally (without an Nvidia GPU) to make sure it starts correctly:
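A minimal invocation, assuming the tag from the build step above; -p publishes the notebook port and --rm removes the container on exit:
docker run --rm -p 8888:8888 custom-jupyter:0.0.1
You should see output similar to the following: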
[I 09:41:41.089 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 09:41:41.191 NotebookApp] Serving notebooks from local directory: /tf
[I 09:41:41.191 NotebookApp] Jupyter Notebook 6.5.3 is running at:
[I 09:41:41.191 NotebookApp] http://9494fdf7e870:8888/?token=149305c10aef7b652d2fddedcb8b7266b67b4ad142281223
[I 09:41:41.191 NotebookApp] or http://127.0.0.1:8888/?token=149305c10aef7b652d2fddedcb8b7266b67b4ad142281223
[I 09:41:41.191 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 09:41:41.193 NotebookApp]
To access the notebook, open this file in a browser:
file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
Or copy and paste one of these URLs:
http://9494fdf7e870:8888/?token=149305c10aef7b652d2fddedcb8b7266b67b4ad142281223
or http://127.0.0.1:8888/?token=149305c10aef7b652d2fddedcb8b7266b67b4ad142281223
Open http://127.0.0.1:8888/?token=XXX in your browser to access the Jupyter Notebook.
Tag the image with the Harbor registry URL and your project name
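For example, with the registry URL and project name used in this guide:
docker tag custom-jupyter:0.0.1 registry.ice.ri.se/myproject/custom-jupyter:0.0.1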
Push the image to Harbor
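Log in to the registry first if you have not already done so:
docker login registry.ice.ri.se
docker push registry.ice.ri.se/myproject/custom-jupyter:0.0.1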
You can now use the image registry.ice.ri.se/myproject/custom-jupyter:0.0.1 in your Rancher Jupyter Node application.
Warning
This Dockerfile is only an example. Getting outdated libraries to work correctly is left as an exercise for the reader.
Open Data Cube image
The Open Data Cube is a software library for working with large spatiotemporal data sets. It is used to manage, index, and query large collections of satellite imagery and other Earth observation data.
In this example, we will build a Docker image with the Open Data Cube and Jupyter Notebook installed. The image also has TensorFlow and GPU support.
FROM tensorflow/tensorflow:latest-gpu-jupyter
# Install Open Data Cube
RUN apt-get update && apt-get install -y libpq-dev python3-dev
RUN pip install --upgrade pip && pip install datacube
# Write the datacube configuration to /etc/datacube.conf
# (printf is used rather than echo $'...', since the default /bin/sh
# shell used by RUN does not support ANSI-C quoting)
RUN printf '%s\n' \
    '[datacube]' \
    'db_database: odc18' \
    'db_hostname: 10.10.116.26' \
    'db_port: 30257' \
    'db_username: hack' \
    'db_password: Sg_TahLcwxQiId_s' \
    > /etc/datacube.conf
# Set environment variables
ENV DATACUBE_CONFIG_PATH=/etc/datacube.conf
ENV AWS_S3_ENDPOINT=10.10.104.34
ENV AWS_VIRTUAL_HOSTING=FALSE
ENV AWS_ACCESS_KEY_ID=TGP66WXOY4TV3QUTN4CS
ENV AWS_SECRET_ACCESS_KEY=ibJ5NgN62TlZxMOGA2MNuOj4NhSvfYwqUDSN2L4f
ENV GDAL_HTTP_UNSAFESSL=YES
ENV GDAL_DISABLE_READDIR_ON_OPEN=YES
# Expose port 8888 for Jupyter notebook
EXPOSE 8888
# Set working directory
WORKDIR /tf
# Set default command to start Jupyter notebook
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
Change the db_hostname, db_username, db_password, AWS_S3_ENDPOINT, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY values to match your environment.
Build, tag, and push the image to Harbor as described in the previous section.
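For example (the image name odc-jupyter:0.0.1 is only an illustration; pick a name that suits your project):
docker build -t odc-jupyter:0.0.1 .
docker tag odc-jupyter:0.0.1 registry.ice.ri.se/myproject/odc-jupyter:0.0.1
docker push registry.ice.ri.se/myproject/odc-jupyter:0.0.1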
Makefile
Simplify the build process using a Makefile. Create it in the same directory as the Dockerfile with the following content:
REGISTRY ?= registry.ice.ri.se
PROJECT ?= myproject
NAME ?= custom-jupyter
VERSION ?= 0.0.1
IMAGE ?= $(NAME):$(VERSION)
IMAGE_LATEST ?= $(NAME):latest
.PHONY: build
build:
	docker build -t $(IMAGE) . && \
	docker tag $(IMAGE) $(REGISTRY)/$(PROJECT)/$(IMAGE) && \
	docker tag $(IMAGE) $(REGISTRY)/$(PROJECT)/$(IMAGE_LATEST)

.PHONY: publish
publish:
	docker push -a $(REGISTRY)/$(PROJECT)/$(NAME)

.PHONY: clean
clean:
	docker image rm $(IMAGE)
Note that a Makefile requires tab indentation, not spaces.
Run make build with VERSION as an environment variable, for example:
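VERSION=0.0.2 make build
This builds custom-jupyter:0.0.2 and tags it for the registry as both 0.0.2 and latest; omit VERSION to fall back to the default 0.0.1.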
Push the image to Harbor
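make publish
The publish target runs docker push -a, which uploads all local tags of the image (the version tag and latest) to the registry.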