Docker Reverse Proxying Puzzle

Summary

Several issues related to reverse proxying Docker containers were solved only after I learned more about Docker’s bridges and how to debug faulty containers. In this post, I walk through the steps I took to troubleshoot and solve this problem, in the hope it saves time for someone, somewhere.

Introduction

Docker is a great tool to quickly set up a development environment and reproduce part or all of a given infrastructure. I wanted to experiment with reverse proxying containers behind a single Nginx instance handling the routing. It’s a simple setup, but I quickly faced several issues I was only able to solve once I understood some details about how Docker’s bridges work.

What I Wanted To Do

  • I wanted to set up Nginx in a Docker container reverse proxying Gitea (also running as a Docker container), making it reachable through a sub-url (http://ariona.local/git).

  • I also wanted to prevent the Gitea container from being accessed by the outside world. The container needed to be accessed only through Nginx.

Problem

Here is the initial puzzle, where fixing one issue created another:

  • Upon accessing the sub-url associated with the service, an Error 502 message was displayed in the browser, no matter the configuration on the Nginx side.

  • If gitea was set to listen on 127.0.0.1, it could only be reached from inside the container, even if the ports (22, 3000) were published. The container was isolated, sure, but a tad too much.

  • If set to listen on 0.0.0.0 and the ports were published, the service was accessible from everywhere on the network by calling host:3000.

  • If set to listen on 0.0.0.0 and the ports were not published, the service still couldn’t be accessed from outside the container.

Solution

  • Set the listening HTTP address for Gitea to 0.0.0.0:3000.

  • Create a user-defined bridge network and attach both containers to it, so that they are isolated from the outside world and can resolve each other’s addresses through Docker’s embedded DNS, something the default bridge network does not offer.

  • Publish the standard HTTP (80) and HTTPS (443) ports of the Nginx container, but not those of Gitea. Both containers can resolve each other’s addresses and establish connections on their exposed ports, while the Gitea endpoint stays hidden from the outside world, that is, from everything not attached to the bridge network.
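
I no longer have the exact front.conf deployed below, so here is a hedged sketch of what the relevant Nginx configuration could look like. The sub-url /git and the port 3000 come from the setup above; the container name gitea is an assumption, as is everything else in the file:

```nginx
# Hypothetical /srv/nginx/conf.d/front.conf -- a sketch, not the original file.
server {
    listen 80;
    server_name ariona.local;

    # Route the /git sub-url to the Gitea container.
    # "gitea" resolves through Docker's embedded DNS on the user-defined
    # bridge; the container name itself is an assumption.
    location /git/ {
        proxy_pass http://gitea:3000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

For sub-url serving to work properly, Gitea itself typically also needs its ROOT_URL setting (in app.ini) pointed at http://ariona.local/git/ so that generated links use the sub-path.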

Caveat: since I used Ansible to deploy the containers, I don’t have a Dockerfile or docker-compose.yml to illustrate those steps. Here is the relevant playbook anyway. The relevant tasks are the one creating the user-defined bridge network_ariona and the one creating a container attached to this bridge.

- hosts: ariona.local
  become: yes
  gather_facts: no

  tasks:
    - name: ensure nginx volume folders exist
      file: path=/srv/nginx/conf.d state=directory recurse=yes

    - name: copy nginx configuration
      copy:
        src: ../../config/nginx/front.conf
        dest: /srv/nginx/conf.d/
        owner: root
        group: root
        mode: '0644'

    - name: ensure docker is started
      service:
        name: docker
        state: started
        enabled: yes

    - name: Create a network
      community.docker.docker_network:
        name: network_ariona

    - name: Create a nginx container with volume
      community.docker.docker_container:
        restart: yes
        name: nginx
        image: nginx:stable-alpine
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /srv/nginx/conf.d:/etc/nginx/conf.d
          - /var/www:/var/www
        networks:
          - name: network_ariona

    - name: test new nginx configuration file
      command: docker exec nginx nginx -t
      register: nginx_test_results
      changed_when: false

    - name: reload nginx inside the container
      when: nginx_test_results.rc == 0
      command: docker exec nginx nginx -s reload
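
For readers not using Ansible, the same setup expressed as a docker-compose.yml might look roughly like this. The post itself deployed with Ansible, so this file is an illustrative reconstruction, not the original; the gitea service name and image are assumptions:

```yaml
# Illustrative equivalent of the Ansible tasks above -- not the original file.
version: "3"

services:
  nginx:
    image: nginx:stable-alpine
    ports:            # published: reachable from outside Docker
      - "80:80"
      - "443:443"
    volumes:
      - /srv/nginx/conf.d:/etc/nginx/conf.d
      - /var/www:/var/www
    networks:
      - network_ariona

  gitea:
    image: gitea/gitea:latest
    # no "ports:" section -- Gitea stays reachable only on the bridge
    networks:
      - network_ariona

networks:
  network_ariona:
```

Note the asymmetry: only the nginx service publishes ports, which is exactly what keeps Gitea unreachable from outside the bridge network.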

How I Debugged It

These commands really helped me troubleshoot and precisely identify what was reachable, and from where. docker inspect is useful to get the IP of the container, while docker logs reveals which address the service inside the container is listening on.

# to see the ip of the container
docker inspect <containerName>
# to check which address the service inside the container is listening on
docker logs <containerName>
# from both inside and outside the containers:
curl <ip>:<port>
netstat -tan

What You Need To Know

Solving this problem requires understanding two things about Docker: what user-defined bridges are, and what the difference is between a published and an exposed port.

Regarding Bridges

Those bits of information from the documentation are what really helped me understand everything related to this problem:

On a user-defined bridge network, containers can resolve each other by name or alias. ... Containers connected to the same user-defined bridge network effectively expose all ports to each other. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the -p or --publish flag.

Regarding Ports

  • An exposed port is the port Docker knows the service running in the container listens on. The port is not, by itself, accessible from outside the container. Exposing a port using EXPOSE in a Dockerfile is merely a way to document which ports are used in this container.

  • A published port is accessible through the Docker host, meaning the service is reachable from outside the container, be it from containers on other networks or from services outside Docker. Publishing a port binds the exposed port to a random or specific port on the Docker host. This can be done using the -p option of docker run, or the ports section of docker-compose.yml if docker-compose is used.
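
The distinction can be illustrated with a tiny Dockerfile (image and port chosen arbitrarily for the example):

```dockerfile
# EXPOSE only documents the listening port; it publishes nothing by itself.
FROM gitea/gitea:latest
EXPOSE 3000
```

Building and running this image leaves port 3000 unreachable from the host. It only becomes reachable when published at run time, e.g. with docker run -p 3000:3000, which is precisely the step the solution above skips for the Gitea container.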

Resources