(These examples imply opinionated workflows and local configurations which may not apply to your case. Good projects have good documentation; ALWAYS refer to it for the official position of the project itself. The information here is for my own purposes, probably won't be updated to reflect changes in the projects used, and will most likely be outdated when you read it. Again, ALWAYS refer to the project documentation.)
There are two main ways of developing Python apps with Docker.
- develop outside the container and, after it's done (there is no real done for software), copy the whole project with a COPY directive in the Dockerfile and regenerate the virtualenv from requirements.txt (a sketch of such a Dockerfile follows this list)
- develop "inside" the container in a development container on a newly created volume and, after that, use this volume to create a production container
The first approach is easier and is basically the one documented by the Docker documentation itself. However, it's more error-prone. Why?
(...)
The second approach is what I use:
The Dockerfile of the dev container:
# dev image
FROM fedora:28
WORKDIR /app
# needed to build scrapy and djangorestframework
RUN dnf -yq install redhat-rpm-config python3-devel gcc
RUN useradd -m -d /home/dbolgheroni -u 1000 dbolgheroni
RUN chown dbolgheroni:dbolgheroni /app
USER dbolgheroni:dbolgheroni
EXPOSE 8000
CMD /bin/bash
Build the image:
$ sudo docker build -t myproj:dev .
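To confirm the build worked, the image can be listed by repository name (a plain docker images call, no project-specific assumptions here):
$ sudo docker images myproj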
The Dockerfile is self-explanatory, but the whys of this approach are not. Creating a Python virtualenv is error-prone: some modules compile a lot of C code, and builds that work on one platform can fail on another. Creating the virtualenv inside the container lets you hit these problems earlier, not later.
Run the container:
$ sudo docker run --name myproj-dev -it -p 0.0.0.0:8000:8000/tcp \
> -v /home/dbolgheroni/myproj/myproj/:/app/myproj/:Z \
> myproj:dev \
> /bin/bash
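With the container running and the project bind-mounted under /app/myproj, the virtualenv can be created and populated from inside it, so any C extensions compile against the container's own toolchain. A sketch, assuming requirements.txt sits at the top of the mounted myproj/ directory:
$ python3 -m venv /app/venv
$ . /app/venv/bin/activate
(venv) $ pip install -r /app/myproj/requirements.txt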
The Dockerfile for the production container comes later.
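In the meantime, purely as a guess at where this is heading (the paths and the Django-style runserver entry point are assumptions based on the dev setup above, not the final version), a production Dockerfile might look roughly like this:
# production image (a sketch, not the final version)
FROM fedora:28
WORKDIR /app
RUN useradd -m -d /home/dbolgheroni -u 1000 dbolgheroni
# copy the project, including the venv built in the dev container
COPY --chown=dbolgheroni:dbolgheroni . /app/
USER dbolgheroni:dbolgheroni
EXPOSE 8000
CMD /app/venv/bin/python /app/myproj/manage.py runserver 0.0.0.0:8000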