Installing an application on a VM is like manually setting up a tent
Installing an application with Docker is like setting up a popup tent
We'll ignore taking down the tents for the purposes of this analogy
Explanation Part 1; Manual Tent
Imagine the ground is a nice clean VM that we just made
We gotta set up the tent poles (requirements) and the fabric (the application)
Putting tent poles into tent fabric is hard and might take multiple people
If the poles break / need upgrading it's hard to do
Explanation Part 2; Popup Tent
Imagine the ground is (again) a nice clean VM that we just made
We (usually) only have to install one thing on it; Docker (no analogy here, sorry)
The tent is in its bag (Docker image) ready to be opened
We open the bag and the popup tent sets itself up for us (Docker container)
2.5 More on Images
In the output of the last command I gave you, you might have seen something like this;
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
This is Docker saying
"I don't have the 'hello-world' image on this machine, let me go get it"
Where does it get images?
If only there was some kind of website where people could upload Docker images for various widely used applications...
The Docker Hub!
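For example, you can grab an image from the Hub yourself before running it;
# download the image without running it
docker pull hello-world
# run it (no download needed this time)
docker run hello-world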
2.8 Building our own images
"So the Docker Hub is great but it doesn't have Docker images for my stuff"
Of course it doesn't, it's on you to make images for your stuff
How? Let me show you!
I want to make a standalone version using Flask
I want to use Docker to deploy it
I guarantee nobody has made one for me since it's a new project
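For a rough idea of what we're containerising, a minimal Flask app could look like this (just a sketch; the real app.py and requirements.txt are whatever your project contains);
# app.py - a minimal Flask app listening on port 5000
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Docker!'

if __name__ == '__main__':
    # 0.0.0.0 so the app is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)
For this sketch, requirements.txt would only need to list flask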
Step 1: Dockerfile
First things first, we need a Dockerfile
This is a set of instructions for Docker to follow to build my image
We need to copy the code into the image, install requirements, and set the command that the image will run
Step 1: Dockerfile
# Base image is the python 3.7 docker image
FROM python:3.7
# Set the working directory inside the image (/app is just a typical choice)
WORKDIR /app
# Copy from the build context to the image
COPY . .
# Install requirements
RUN pip install -r requirements.txt
# Load the latest submodules
RUN git submodule init
RUN git submodule update --remote
# Expose port 5000
EXPOSE 5000
# Set the entrypoint (CMD also works here)
ENTRYPOINT python3.7 app.py
Step 2: Building the image
Now that we have a Dockerfile, we can build our own image
In our project directory we can run the following;
docker build -t techtalks-flask .
Step 3: Running the image
After building our image, we can verify it is there using;
docker images
To then run the image we need to do the following;
docker run -p 5000:5000 -d techtalks-flask
Step 4: Test
Last time I checked, it works, so if it doesn't for you it's your fault >.>
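To check for yourself, hit the app from your own machine (assuming the 5000:5000 port mapping from the run command above);
curl http://localhost:5000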
Also you can use the following commands to list your containers;
# show running containers
docker ps
# show all containers, dead or alive
docker ps -a
0.2 Some other cool stuff
Before I spill some company secrets, I want to share a couple of other cool things with you
Run this from the project directory;
docker run -p 5000:5000 -v $(pwd):/home/freyamade/public_html -d freyamade
Edit templates/index.html, reload the page and you'll see it update in real time!
This is handy if you have your application Dockerised already and want to do development work
-v + web development
Working on a new web project but don't want to install Apache on your machine to test?
# Run from inside project directory, then go to localhost:8080
docker run -v $(pwd):/usr/local/apache2/htdocs/ -p 8080:80 -d httpd
What if you have an application with a lot of required external services (sql, load balancer, etc)?
You could set up all the containers yourself manually...
Or you could use Docker Compose (installed separately)
techtalks-flask + Compose
Let's say I add some reason to use redis in the techtalks-flask app
I could have the following compose file;
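Something along these lines would do (a minimal sketch; the service names and settings are assumptions);
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: redis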
And then bring up both images using;
docker-compose up -d
Kubernetes (k8s) is an open-source system for automating deployment, scaling, and management of containerized applications.
3. Let's spill some trade secrets
Why is the second setup (clusters of smaller servers behind a load balancer) better?
If one server goes down, we don't lose the API
We now have multiple servers all able to handle requests instead of one
If all the servers go down, we lose a single application, not all of them*
We needed to test that the new system actually made an improvement.
List 100 Users from the Py2 Live and Py3 Stage APIs one million times, and calculate the average response time locally.
Speed increase from Python2 to Python3; 57x
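A benchmark along those lines might look something like this (just a sketch, not the actual script; the URLs are placeholders, not the real APIs);
# benchmark.py - time the "list 100 Users" call against both APIs
import time
import requests

URLS = {
    'py2 live': 'https://example.com/py2/users/?limit=100',   # placeholder
    'py3 stage': 'https://example.com/py3/users/?limit=100',  # placeholder
}
RUNS = 1_000_000

for name, url in URLS.items():
    total = 0.0
    for _ in range(RUNS):
        start = time.monotonic()
        requests.get(url)
        total += time.monotonic() - start
    print(f'{name}: average {total / RUNS:.4f}s per request')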
Speed Increase (cont'd)
Speed increase mostly due to architectural changes.
Original API running as monolith on single server.
New API running as clusters of smaller servers with load balancing.
Old Deployment System
Gitlab CI automates deployment when code is pushed to specific branches.
Old method of deployment;
SSH onto appropriate server,
git fetch origin, and
git reset --hard origin/<branch>
Most of the time, deployment scripts did not contain a job to update requirements
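Put together, the old process looked roughly like this (server names, paths and branches are placeholders);
# from your own machine
ssh deployer@api-server

# then, in the project directory on the server
git fetch origin
git reset --hard origin/master

# the step that usually got forgotten
pip install -r requirements.txt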
New Deployment System
Deployment still automated but handled differently
Gitlab CI triggers Docker builds from specific branches
After the image(s) are built, we deploy them to the appropriate cluster(s)
Rolling release on the cluster; new images brought up before bringing down old ones
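As a rough idea, a build job in .gitlab-ci.yml could look like this (the registry path and branch are placeholders, not our actual config; it assumes the runner is already logged in to the registry);
# .gitlab-ci.yml
build_image:
  stage: build
  only:
    - master
  script:
    - docker build -t registry.example.com/techtalks-flask:latest .
    - docker push registry.example.com/techtalks-flask:latest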