My intro to Docker - Part 3 of 5
So far in this little blog series about Docker, I have covered what Docker is, how it works, how to get a container up and running using pre-built images, as well as your own images. But so far, it has all been about setting up a single container with some form of application running inside it. What if we have a more complicated scenario? What if we have a couple of different things we want to run together? Maybe we want to run our ASP.NET Core app that we built in the previous post behind an nginx instance instead of exposing the Kestrel server to the internet… Well, obviously Docker has us covered.
However, before we go any further, I just want to mention that I will only be covering something called docker-compose in this post. This can be used to create a stack of containers that are started and stopped together. I will not be covering distributing the application across several nodes this time. There will be more about that later. And even if that is probably the end goal in a lot of cases, being able to just run on a single host can be useful as well. Especially while developing stuff.
What is docker-compose?
When you installed Docker for Windows or Docker for Mac, you automatically got some extra tools installed. One of them is docker-compose, which is a tool for setting up several containers together in a stack, while configuring their network etc. Basically setting up and configuring a set of containers/apps that work together.
It does this by using a file called docker-compose.yml. At least it’s called that by default. You can pass in another name as a parameter to docker-compose if you want to… And as the file extension hints, it is a YAML file. This means that it defines the services we want to have in our application stack using YAML syntax. And yes, we are now talking about services. A service is basically a container. It is a bit more complicated than that, but for this you can see it as a container.
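For example, if the file had a different name you could point docker-compose at it with the -f flag. The file name here is just a made-up illustration:

# Use a compose file with a non-default name (my-stack.yml is a hypothetical name)
docker-compose -f my-stack.yml up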
Configuring an application stack using docker-compose
In the previous post, I created a simple ASP.NET Core application that we could run in a container. Then we created an image called myimage, containing that application. What I want now, is to protect that application behind an nginx reverse proxy server. This means that I need 2 containers. One running nginx, exposing a port that a client can browse to, and one running the actual application, which nginx proxies requests to.
But before I can set up my stack, I need to set up the nginx image I want to use. And I want that set-up to live in a sibling directory to the application. So I’ll create the following folder structure
DockerDemo
- DockerApp
- nginx
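Creating that structure from a terminal could look something like this. This is just one way to do it, assuming a bash-style shell and that the DockerApp folder from the previous post is already in place inside the DockerDemo root:

# Create the nginx folder as a sibling to the existing DockerApp folder
cd DockerDemo
mkdir nginx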
And inside the nginx folder I create 2 files. One called nginx.conf and one called dockerfile. Inside the nginx.conf I need to add the configuration that nginx should use. And in this case, it looks like this
worker_processes 1;
events { worker_connections 1024; }
http {
    sendfile on;

    upstream web {
        server web:80;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://web;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
The only thing that is really interesting here is the “proxy_pass http://web”. This is the line that tells nginx to proxy all calls to a server called “web”. This will be a DNS name set up by docker-compose, representing the container running the ASP.NET Core app.
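A quick way to see that name resolution in action, once the stack defined later in this post is up and running, is to make a request to the web service from inside the nginx container. This is just a sanity check, using the busybox wget that ships with the alpine-based nginx image:

# From inside the running nginx container, call the web service by its compose service name
docker-compose exec nginx wget -qO- http://web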
The dockerfile contains the following
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
which tells Docker that it should use the nginx:alpine image as the base image, and then just copy the nginx.conf file into the /etc/nginx/ folder, replacing the default configuration.
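docker-compose will build this image automatically in a little while, but if you want to check the dockerfile on its own first, you can build it manually from the DockerDemo root. The tag name here is just something made up for the test:

# Build the nginx image on its own to verify the dockerfile (the tag is arbitrary)
docker build -t nginx-proxy-test ./nginx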
That’s it for setting up the nginx image. The next step is to set up the docker-compose stuff to get our stack up and running. So I start out by creating a new file called docker-compose.yml in the DockerDemo root folder. I need it to be a sibling to both the DockerApp and nginx folders. It’s in this file I’m going to define the services that make up my application stack. But before I can do that, I need to define what version this file uses… So I’ll add a version entry like this
version: "3"
Then I need to configure my services. So I’ll start out with the nginx service, which I set up by adding the following
services:
  nginx:
    build:
      context: ./nginx
    ports:
      - "8080:80"
This tells docker-compose that I want a service called nginx, built using the dockerfile in the nginx directory, and mapping port 8080 on the host to port 80 in the container.
Finally, I add the web service as well, by adding
services:
  …
  web:
    image: myimage
As you can see here, I’m adding a service called web that should be based on the image called myimage. And that’s actually everything I need. docker-compose will automatically set up a network for all the services defined, and make sure that they can communicate with each other using the service names. So when nginx proxies calls to http://web, the requests end up at this service in the application stack.
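Put together, the complete docker-compose.yml ends up looking like this, with the same version, services and names as above:

version: "3"

services:
  nginx:
    build:
      context: ./nginx
    ports:
      - "8080:80"
  web:
    image: myimage

If you want to double check the file before starting anything, running docker-compose config will parse it and print the resolved configuration, or complain if something is off.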
With all my files in place I can go ahead and call
docker-compose up -d
to start up the required services in detached mode, which means that it won’t attach the output from the containers to your terminal. If you want to see the output from the containers, which can be really helpful for debugging, just omit the -d flag.
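You can also keep the -d flag and look at the output afterwards using docker-compose logs. For example, to follow the output from the services defined above:

# Follow the log output from the web service (leave out the service name to see all services)
docker-compose logs -f web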
Once it has built the nginx image and started both containers, you can just open a browser and browse to http://localhost:8080 to see the result. It’s not very impressive as such, but remember that you just set up a reverse proxy server and a 2-service application stack in just a couple of minutes. And adding a SQL database and/or redis cache is just as easy.
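If you prefer the terminal over a browser, a quick curl against the mapped port does the same check. With -i you should also see nginx show up in the response headers, since the response now goes through the proxy:

# Request the app through the nginx reverse proxy on the mapped host port
curl -i http://localhost:8080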
When you are done playing around with the application, you can just run
docker-compose down
to stop all the services and tear down the containers. The only thing left is an nginx image called dockerdemo_nginx, which is the one that was automatically created for us by docker-compose. If you want to get rid of it, just run
docker rmi dockerdemo_nginx
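Alternatively, if you know up front that you want the locally built images removed as well, docker-compose down can do that in one go using the --rmi flag:

# Stop the stack and also remove the images that docker-compose built locally
docker-compose down --rmi local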
That’s it!
There is obviously a whole heap more to docker-compose, but now you at least know the basics of how you can set up a stack of services that can all communicate with each other in an easy way.
In the next post I’ll be looking at how we can set up a Microsoft SQL Server instance inside of a Docker container. Yes…you read it right…a Microsoft SQL Server running on Linux in a Docker container!