Monday, October 4, 2021

Running a Pod on Kubernetes

Introduction

I want to test Docker Desktop as a local Kubernetes (K8s) cluster. This is not a production deployment.

Using Kubectl

$ kubectl run nginx --image=nginx --restart=Never
pod/nginx created

This will create a pod named nginx. To make the pod accessible from the browser, you can use kubectl port-forward:

$ kubectl port-forward nginx 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80

Open http://localhost:8888/ in the browser; you should see the nginx welcome page.
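
Alternatively, you can test the forwarded port from another terminal with curl (response headers trimmed for brevity):

$ curl -I http://localhost:8888/
HTTP/1.1 200 OK
Server: nginx/1.21.3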

You can check the pod logs as well:
$ kubectl logs nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2021/10/05 05:42:13 [notice] 1#1: using the "epoll" event method
2021/10/05 05:42:13 [notice] 1#1: nginx/1.21.3
2021/10/05 05:42:13 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6)
2021/10/05 05:42:13 [notice] 1#1: OS: Linux 5.10.47-linuxkit
2021/10/05 05:42:13 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/10/05 05:42:13 [notice] 1#1: start worker processes
2021/10/05 05:42:13 [notice] 1#1: start worker process 32
2021/10/05 05:42:13 [notice] 1#1: start worker process 33
2021/10/05 05:42:13 [notice] 1#1: start worker process 34
2021/10/05 05:42:13 [notice] 1#1: start worker process 35
2021/10/05 05:42:13 [notice] 1#1: start worker process 36
2021/10/05 05:42:13 [notice] 1#1: start worker process 37
127.0.0.1 - - [05/Oct/2021:05:43:38 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" "-"
127.0.0.1 - - [05/Oct/2021:05:43:38 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://localhost:8888/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" "-"
2021/10/05 05:43:38 [error] 34#34: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "localhost:8888", referrer: "http://localhost:8888/"
127.0.0.1 - - [05/Oct/2021:05:43:40 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" "-"
127.0.0.1 - - [05/Oct/2021:05:43:41 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" "-"
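
You can also log into the pod itself. A minimal sketch using kubectl exec (assuming the stock nginx image, which ships with sh and the default html directory shown here):

$ kubectl exec -it nginx -- sh
# ls /usr/share/nginx/html
50x.html  index.html
# exit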

Using YAML

You can create a pod using a YAML file as well.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: testpod
    image: alpine:3.5
    command: ["ping", "8.8.8.8"]
Save the above manifest to pod.yaml.

$ kubectl apply -f pod.yaml
pod/testpod created
Once the pod is created, you can check its status with kubectl get po:
$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
testpod   1/1     Running   0          4s
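For more detail, such as scheduling events and container state, you can also describe the pod:
$ kubectl describe po testpod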
As this pod pings 8.8.8.8, you can verify it is running by looking at the log messages.
$ kubectl logs testpod
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=37 time=22.417 ms
64 bytes from 8.8.8.8: seq=1 ttl=37 time=14.821 ms
64 bytes from 8.8.8.8: seq=2 ttl=37 time=13.588 ms
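
When you are done, you can clean up both pods from this post (a cleanup step, not part of the original session):
$ kubectl delete pod nginx testpod
pod "nginx" deleted
pod "testpod" deleted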

We have seen how to run a pod using the above two methods.

Happy Coding!!

Saturday, October 2, 2021

Run Kubernetes on Google Kubernetes Engine

Introduction

I was interested in learning Kubernetes. I first tried running K8s on Minikube on macOS, but Minikube has some limitations there, so I tried GKE instead.

Account Setup on GKE

I followed the steps below to sign up on GCP.

Open https://console.cloud.google.com/ in the browser.



Sign in using your email and password.



You will have to agree to the terms.


Click on the Activate button in the top right corner.


Choose the country and the option that best describes your organization or needs.

There are two more steps: verify by phone number and add credit card details. You will be charged if you don't stop after the trial period or if you exhaust the $300 credit.

After signup, you will land on the home page shown below.



Google Kubernetes Engine

You can access GKE from the left-hand navigation by clicking Kubernetes Engine >> Clusters.


If you are using it for the first time, you will have to enable the Kubernetes Engine API.



Click on Create; it will open the page below.


Click on Configure next to GKE Standard. It will open the page below.


Click on the Create button at the bottom of the page. It will open the cluster page and start provisioning the cluster for you.

In a few minutes, the cluster will be in the running state.


The green dot in the status column shows the cluster is up and running. You can check the cluster details by clicking on the cluster-1 link under Name. Refer below.


The cluster has 3 nodes.


Google Cloud Shell 

We will use Cloud Shell to interact with the cluster.

Click on the Cloud Shell icon in the top right corner. Refer to the screenshot below.


It will take a few minutes to provision the Cloud Shell instance, after which a shell appears at the bottom of the page.


You can now start typing commands in the shell.

gcloud container clusters get-credentials cluster-1 --zone "us-central1-c"

This will open a pop-up asking you to authorize. Click on the Authorize button.

Cloud Shell will now have access to the GKE cluster. The output on the shell will look like below:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ gcloud container clusters get-credentials cluster-1 --zone "us-central1-c"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$

Run the command below to launch hello-server using a Docker image from Google's container registry.

kubectl create deployment hello-server --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0

Output will be like below:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ kubectl create deployment hello-server --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
deployment.apps/hello-server created
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$
Expose the deployment via a Service.

kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
Output will be like below:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ kubectl expose deployment hello-server --type LoadBalancer --port 80 --target-port 8080
service/hello-server exposed
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$

Inspect and view the application

Inspect the running Pods by using kubectl get pods:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-5bd6b6875f-p2z64   1/1     Running   0          6m46s
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$

You should see one hello-server Pod running on your cluster.
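
As an optional experiment (not part of the original walkthrough), you could scale the Deployment, watch more Pods appear, and then scale back:

kubectl scale deployment hello-server --replicas=3
kubectl get pods
kubectl scale deployment hello-server --replicas=1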

Inspect the hello-server Service by using kubectl get service:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ kubectl get service
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
hello-server   LoadBalancer   10.8.13.54   34.72.114.79   80:30778/TCP   6m
kubernetes     ClusterIP      10.8.0.1     <none>         443/TCP        33m
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$
From this command's output, copy the hello-server Service's external IP address from the EXTERNAL-IP column.
Note: You might need to wait several minutes before the Service's external IP address populates. If the application's external IP is <pending>, run kubectl get service again.
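
You can also watch the Service until the address is assigned (press Ctrl+C to stop watching):

kubectl get service hello-server --watch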

View the application from your web browser by using the external IP address with the exposed port:
http://EXTERNAL_IP
Here, the external IP is 34.72.114.79, so the URL is:

http://34.72.114.79
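
You can also hit the endpoint from Cloud Shell with curl. With the hello-app sample image, the response should look similar to the below (the pod hostname will differ):

$ curl http://34.72.114.79
Hello, world!
Version: 1.0.0
Hostname: hello-server-5bd6b6875f-p2z64
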
You have just deployed a containerized web application to GKE.


Clean up 

To avoid incurring charges to your Google Cloud account for the resources used in this page, follow these steps.

Delete the application's Service by running kubectl delete:
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ kubectl delete service hello-server
service "hello-server" deleted
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$
This command deletes the Compute Engine load balancer that you created when you exposed the Deployment.

Delete your cluster by running gcloud container clusters delete:

aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$ gcloud container clusters delete cluster-1 --zone "us-central1-c"
The following clusters will be deleted.
 - [cluster-1] in [us-central1-c]

Do you want to continue (Y/n)?  Y

Deleting cluster cluster-1...done.     
Deleted [https://container.googleapis.com/v1/projects/jovial-honor-327604/zones/us-central1-c/clusters/cluster-1].
aagarwal_jobs@cloudshell:~ (jovial-honor-327604)$
Once the command completes, the cluster is deleted and will no longer incur charges.



Happy Coding !!!



Saturday, September 25, 2021

Working with Multiple Containers using Docker Compose

Introduction

Recently, for a project, I had to use Grafana and Postgres. I had the option to install them directly, but that seemed like a tedious process. Instead, I used the existing Docker images with the docker-compose tool. The containers were up and available for use in a few minutes, and they were able to communicate among themselves and with the host laptop's network.

Types of Mounts in Docker

There are multiple types of mounts available in Docker:
  • Bind Mounts
  • Volumes

Bind Mount

Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full path on the host machine. It does not need to exist on the Docker host already; it is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine's filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead. You can't use Docker CLI commands to directly manage bind mounts. Bind mounts are also called host-mounted volumes. Below is the syntax:

/host/path:/container/path

Below is an example of a bind mount in docker-compose:

version: '3.9'

services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    links:
      - postgres
    volumes:
      - /Users/aagarwal/dev/grafana/grafana_data:/var/lib/grafana:rw

  postgres:
    image: postgres:9.6.6
    container_name: postgres
    environment:
      POSTGRES_USER: postgres     # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres       # define database
    ports:
      - 5432:5432                 # Postgres port
    volumes:
      - /Users/aagarwal/dev/grafana/postgres_data:/var/lib/postgresql/data

networks:
  grafana_network:
    driver: bridge

There are two volumes used in the above file. The host directories serve as the persistent store for the containers. Since we use full absolute paths for the host directories, these are bind mounts.

$ docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest

Use docker inspect devtest to verify that the bind mount was created correctly. Look for the Mounts section:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/tmp/source/target",
        "Destination": "/app",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],

This shows that the mount is a bind mount, it shows the correct source and destination, it shows that the mount is read-write, and that the propagation is set to rprivate.

Stop the container:

$ docker container stop devtest

$ docker container rm devtest


Ideally, we should use bind mounts for configs and similar files; for data persistence, we should use volumes.
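
For example, a config file can be bind-mounted read-only with the :ro flag (a sketch; the nginx.conf path here is hypothetical):

$ docker run -d --name webtest \
  -v "$(pwd)"/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:latest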


Volumes

Volumes are created and managed by Docker. You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation.

When you create a volume, it is stored within a directory on the Docker host. When you mount the volume into a container, this directory is what is mounted into the container. This is similar to the way that bind mounts work, except that volumes are managed by Docker and are isolated from the core functionality of the host machine.

A given volume can be mounted into multiple containers simultaneously. When no running container is using a volume, the volume is still available to Docker and is not removed automatically. You can remove unused volumes using docker volume prune.

When you mount a volume, it may be named or anonymous. Anonymous volumes are not given an explicit name when they are first mounted into a container, so Docker gives them a random name that is guaranteed to be unique within a given Docker host. Besides the name, named and anonymous volumes behave in the same ways.

Volumes also support the use of volume drivers, which allow you to store your data on remote hosts or cloud providers, among other possibilities.

We have used volumes because we want the data to be available for later container runs. Docker manages volumes under /var/lib/docker/volumes/ on the host filesystem.

Create and Manage Volumes

Unlike a bind mount, you can create and manage volumes outside the scope of any container.

Create a volume:

$ docker volume create my-vol

List volumes:

$ docker volume ls
DRIVER              VOLUME NAME
local               my-vol

Inspect a volume:

$ docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]

Remove a volume:

$ docker volume rm my-vol

If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/  in the container.

$ docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

Use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:

"Mounts": [
    {
        "Type": "volume",
        "Name": "myvol2",
        "Source": "/var/lib/docker/volumes/myvol2/_data",
        "Destination": "/app",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],

This shows that the mount is a volume, it shows the correct source and destination, and that the mount is read-write.
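
To clean up this example (mirroring the bind-mount cleanup above; the volume is removed explicitly, since Docker does not delete unused volumes automatically):

$ docker container stop devtest
$ docker container rm devtest
$ docker volume rm myvol2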

Use a volume with docker-compose

A single docker compose service with a volume looks like this:

version: "3.9"
services:
  frontend:
    image: node:lts
    volumes:
      - myapp:/home/node/app
volumes:
  myapp:

On the first invocation of docker-compose up, the volume is created. The same volume is reused on subsequent invocations.

A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:

version: "3.9"
services:
  frontend:
    image: node:lts
    volumes:
      - myapp:/home/node/app
volumes:
  myapp:
    external: true
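
In that case, the volume must exist before you bring the stack up; otherwise Compose reports that the external volume could not be found. For example:

$ docker volume create myapp
$ docker-compose up -d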

Network

When we install Docker on the host machine, it creates three networks by default:

  • bridge
  • host
  • none

$ docker network ls
NETWORK ID     NAME                  DRIVER    SCOPE
ee768c83bcda   bridge                bridge    local
f855c01f90a8   host                  host      local
4d762bc12676   none                  null      local
$

The default bridge network is listed, along with host and none. The latter two are not fully-fledged networks, but are used to start a container connected directly to the Docker daemon host’s networking stack, or to start a container with no network devices. This tutorial will connect two containers to the bridge network.

We can use the default bridge network or a user-defined bridge network.

Use the default bridge network

We can start two containers using the commands below.

$ docker run -dit --name alpine1 alpine ash

$ docker run -dit --name alpine2 alpine ash

This will launch two containers and connect them to the default bridge network. The containers will be able to ping each other using IP addresses but not using names.

Inspect the bridge network to see what containers are connected to it.
$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
        "Created": "2017-06-22T20:27:43.826654485Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "602dbf1edc81813304b6cf0a647e65333dc6fe6ee6ed572dc0f686a3307c6a2c": {
                "Name": "alpine2",
                "EndpointID": "03b6aafb7ca4d7e531e292901b43719c0e34cc7eef565b38a6bf84acf50f38cd",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "da33b7aa74b0bf3bda3ebd502d404320ca112a268aafe05b4851d1e3312ed168": {
                "Name": "alpine1",
                "EndpointID": "46c044a645d6afc42ddd7857d19e9dcfb89ad790afb5c239a35ac0af5e8a5bc5",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Near the top, information about the bridge network is listed, including the IP address of the gateway between the Docker host and the bridge network (172.17.0.1). Under the Containers key, each connected container is listed, along with information about its IP address (172.17.0.2 for alpine1 and 172.17.0.3 for alpine2).


The containers are running in the background. Use the docker attach command to connect to alpine1.

$ docker attach alpine1

/ #

The prompt changes to # to indicate that you are the root user within the container. Use the ip addr show command to show the network interfaces for alpine1 as they look from within the container:

# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

The first interface is the loopback device. Ignore it for now. Notice that the second interface has the IP address 172.17.0.2, which is the same address shown for alpine1 in the previous step.

From within alpine1, make sure you can connect to the internet by pinging google.com. The -c 2 flag limits the command to two ping attempts.

# ping -c 2 google.com

PING google.com (172.217.3.174): 56 data bytes
64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms
64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms

--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 9.841/9.869/9.897 ms

Now try to ping the second container. First, ping it by its IP address, 172.17.0.3:

# ping -c 2 172.17.0.3

PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms

--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.086/0.090/0.094 ms

This succeeds. Next, try pinging the alpine2 container by container name. This will fail.

# ping -c 2 alpine2

ping: bad address 'alpine2'
Detach from alpine1 without stopping it by using the detach sequence, CTRL + p CTRL + q (hold down CTRL and type p followed by q). If you wish, attach to alpine2 and repeat steps 4, 5, and 6 there, substituting alpine1 for alpine2.


Stop and remove both containers.

$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2

Remember, the default bridge network is not recommended for production. 

Use user-defined bridge networks

In this example, we again start two alpine containers, but attach them to a user-defined network called alpine-net, which we create first. These containers are not connected to the default bridge network at all. We then start a third alpine container which is connected to the bridge network but not connected to alpine-net, and a fourth alpine container which is connected to both networks.

  1. Create the alpine-net network. You do not need the --driver bridge flag since it’s the default, but this example shows how to specify it.

    $ docker network create --driver bridge alpine-net
    
  2. List Docker’s networks:

    $ docker network ls
    
    NETWORK ID          NAME                DRIVER              SCOPE
    e9261a8c9a19        alpine-net          bridge              local
    17e324f45964        bridge              bridge              local
    6ed54d316334        host                host                local
    7092879f2cc8        none                null                local
    

    Inspect the alpine-net network. This shows you its IP address and the fact that no containers are connected to it:

    $ docker network inspect alpine-net
    
    [
        {
            "Name": "alpine-net",
            "Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec",
            "Created": "2017-09-25T21:38:12.620046142Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.18.0.0/16",
                        "Gateway": "172.18.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {},
            "Options": {},
            "Labels": {}
        }
    ]
    

    Notice that this network’s gateway is 172.18.0.1, as opposed to the default bridge network, whose gateway is 172.17.0.1. The exact IP address may be different on your system.

  3. Create your four containers. Notice the --network flags. You can only connect to one network during the docker run command, so you need to use docker network connect afterward to connect alpine4 to the bridge network as well.

    $ docker run -dit --name alpine1 --network alpine-net alpine ash
    
    $ docker run -dit --name alpine2 --network alpine-net alpine ash
    
    $ docker run -dit --name alpine3 alpine ash
    
    $ docker run -dit --name alpine4 --network alpine-net alpine ash
    
    $ docker network connect bridge alpine4
    

    Verify that all containers are running:

    $ docker container ls
    
    CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
    156849ccd902        alpine              "ash"               41 seconds ago       Up 41 seconds                           alpine4
    fa1340b8d83e        alpine              "ash"               51 seconds ago       Up 51 seconds                           alpine3
    a535d969081e        alpine              "ash"               About a minute ago   Up About a minute                       alpine2
    0a02c449a6e9        alpine              "ash"               About a minute ago   Up About a minute                       alpine1
    
  4. Inspect the bridge network and the alpine-net network again:

    $ docker network inspect bridge
    
    [
        {
            "Name": "bridge",
            "Id": "17e324f459648a9baaea32b248d3884da102dde19396c25b30ec800068ce6b10",
            "Created": "2017-06-22T20:27:43.826654485Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {
                "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                    "Name": "alpine4",
                    "EndpointID": "7277c5183f0da5148b33d05f329371fce7befc5282d2619cfb23690b2adf467d",
                    "MacAddress": "02:42:ac:11:00:03",
                    "IPv4Address": "172.17.0.3/16",
                    "IPv6Address": ""
                },
                "fa1340b8d83eef5497166951184ad3691eb48678a3664608ec448a687b047c53": {
                    "Name": "alpine3",
                    "EndpointID": "5ae767367dcbebc712c02d49556285e888819d4da6b69d88cd1b0d52a83af95f",
                    "MacAddress": "02:42:ac:11:00:02",
                    "IPv4Address": "172.17.0.2/16",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.bridge.default_bridge": "true",
                "com.docker.network.bridge.enable_icc": "true",
                "com.docker.network.bridge.enable_ip_masquerade": "true",
                "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
                "com.docker.network.bridge.name": "docker0",
                "com.docker.network.driver.mtu": "1500"
            },
            "Labels": {}
        }
    ]
    

    Containers alpine3 and alpine4 are connected to the bridge network.

    $ docker network inspect alpine-net
    
    [
        {
            "Name": "alpine-net",
            "Id": "e9261a8c9a19eabf2bf1488bf5f208b99b1608f330cff585c273d39481c9b0ec",
            "Created": "2017-09-25T21:38:12.620046142Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.18.0.0/16",
                        "Gateway": "172.18.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Containers": {
                "0a02c449a6e9a15113c51ab2681d72749548fb9f78fae4493e3b2e4e74199c4a": {
                    "Name": "alpine1",
                    "EndpointID": "c83621678eff9628f4e2d52baf82c49f974c36c05cba152db4c131e8e7a64673",
                    "MacAddress": "02:42:ac:12:00:02",
                    "IPv4Address": "172.18.0.2/16",
                    "IPv6Address": ""
                },
                "156849ccd902b812b7d17f05d2d81532ccebe5bf788c9a79de63e12bb92fc621": {
                    "Name": "alpine4",
                    "EndpointID": "058bc6a5e9272b532ef9a6ea6d7f3db4c37527ae2625d1cd1421580fd0731954",
                    "MacAddress": "02:42:ac:12:00:04",
                    "IPv4Address": "172.18.0.4/16",
                    "IPv6Address": ""
                },
                "a535d969081e003a149be8917631215616d9401edcb4d35d53f00e75ea1db653": {
                    "Name": "alpine2",
                    "EndpointID": "198f3141ccf2e7dba67bce358d7b71a07c5488e3867d8b7ad55a4c695ebb8740",
                    "MacAddress": "02:42:ac:12:00:03",
                    "IPv4Address": "172.18.0.3/16",
                    "IPv6Address": ""
                }
            },
            "Options": {},
            "Labels": {}
        }
    ]
    

    Containers alpine1, alpine2, and alpine4 are connected to the alpine-net network.

  5. On user-defined networks like alpine-net, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery. Let’s connect to alpine1 and test this out. alpine1 should be able to resolve alpine2 and alpine4 (and alpine1, itself) to IP addresses.

    $ docker container attach alpine1
    
    # ping -c 2 alpine2
    
    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms
    
    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.085/0.087/0.090 ms
    
    # ping -c 2 alpine4
    
    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms
    
    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.076/0.083/0.091 ms
    
    # ping -c 2 alpine1
    
    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.026 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.054 ms
    
    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.026/0.040/0.054 ms
    
  6. From alpine1, you should not be able to connect to alpine3 at all, since it is not on the alpine-net network.

    # ping -c 2 alpine3
    
    ping: bad address 'alpine3'
    

    Not only that, but you can’t connect to alpine3 from alpine1 by its IP address either. Look back at the docker network inspect output for the bridge network and find alpine3’s IP address, 172.17.0.2. Try to ping it.

    # ping -c 2 172.17.0.2
    
    PING 172.17.0.2 (172.17.0.2): 56 data bytes
    
    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    

    Detach from alpine1 using detach sequence, CTRL + p CTRL + q (hold down CTRL and type p followed by q).

  7. Remember that alpine4 is connected to both the default bridge network and alpine-net. It should be able to reach all of the other containers. However, you will need to address alpine3 by its IP address. Attach to it and run the tests.

    $ docker container attach alpine4
    
    # ping -c 2 alpine1
    
    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms
    
    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.074/0.078/0.082 ms
    
    # ping -c 2 alpine2
    
    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms
    
    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.077/0.080 ms
    
    # ping -c 2 alpine3
    ping: bad address 'alpine3'
    
    # ping -c 2 172.17.0.2
    
    PING 172.17.0.2 (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms
    
    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.082/0.089 ms
    
    # ping -c 2 alpine4
    
    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.033 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.064 ms
    
    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.033/0.048/0.064 ms
    
  8. As a final test, make sure your containers can all connect to the internet by pinging google.com. You are already attached to alpine4 so start by trying from there. Next, detach from alpine4 and connect to alpine3 (which is only attached to the bridge network) and try again. Finally, connect to alpine1 (which is only connected to the alpine-net network) and try again.

    # ping -c 2 google.com
    
    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.778 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.634 ms
    
    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.634/9.706/9.778 ms
    
    CTRL+p CTRL+q
    
    $ docker container attach alpine3
    
    # ping -c 2 google.com
    
    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.706 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.851 ms
    
    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.706/9.778/9.851 ms
    
    CTRL+p CTRL+q
    
    $ docker container attach alpine1
    
    # ping -c 2 google.com
    
    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.606 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.603 ms
    
    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.603/9.604/9.606 ms
    
    CTRL+p CTRL+q
    
  9. Stop and remove all containers and the alpine-net network.

    $ docker container stop alpine1 alpine2 alpine3 alpine4
    
    $ docker container rm alpine1 alpine2 alpine3 alpine4
    
    $ docker network rm alpine-net
    


Grafana and Postgres

The docker-compose.yml is below:

version: '3.9'

services:
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    links:
      - postgres
    volumes:
      - grafana_data:/var/lib/grafana:rw
    networks:
      - gf_net
    
  postgres:
    image: postgres:9.6.6
    container_name: postgres
    environment:
      POSTGRES_USER: postgres     # define credentials
      POSTGRES_PASSWORD: postgres # define credentials
      POSTGRES_DB: postgres       # define database
    ports:
      - 5432:5432                 # Postgres port
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - gf_net

volumes:
  grafana_data:
  postgres_data:

networks:
  gf_net:
    driver: bridge    

The command to start the Grafana and Postgres containers is:

$ docker-compose up -d
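
Before looking at the volumes and network, you can confirm that both services came up:

$ docker-compose ps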

The network and the Grafana and Postgres containers will be started and available for us. Here, grafana_data and postgres_data are the names inside the docker-compose.yml file; the real volume names are prefixed with the project name.


$ docker volume ls
DRIVER    VOLUME NAME
local     grafana_grafana_data
local     grafana_postgres_data

This created two volumes, grafana_grafana_data and grafana_postgres_data. The volumes were not created externally but by Docker for us.

$ docker volume inspect grafana_grafana_data
[
    {
        "CreatedAt": "2021-09-26T02:32:16Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "grafana",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "grafana_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/grafana_grafana_data/_data",
        "Name": "grafana_grafana_data",
        "Options": null,
        "Scope": "local"
    }
]

The other volume:
$ docker volume inspect grafana_postgres_data
[
    {
        "CreatedAt": "2021-09-26T02:27:19Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "grafana",
            "com.docker.compose.version": "1.29.2",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/grafana_postgres_data/_data",
        "Name": "grafana_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]

The list of networks:

$ docker network ls
NETWORK ID     NAME             DRIVER    SCOPE
ee768c83bcda   bridge           bridge    local
7a112508d3bb   grafana_gf_net   bridge    local
f855c01f90a8   host             host      local
4d762bc12676   none             null      local

Below are the details of the grafana_gf_net network.

$ docker network inspect grafana_gf_net
[
    {
        "Name": "grafana_gf_net",
        "Id": "7a112508d3bb08a69e95be809ce55335d42afb55edf305c3c458f5e53eadd796",
        "Created": "2021-09-26T02:27:13.3604873Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2b55c5da9e850a1d47c169fa1e4b68cba516234aceac32a56d8b0dc2a993fc08": {
                "Name": "grafana",
                "EndpointID": "826cbfbaa52eeac91bd372ea490d8b81f594ad07b4d61bb8f0956cda0e4f2949",
                "MacAddress": "02:42:ac:16:00:03",
                "IPv4Address": "172.22.0.3/16",
                "IPv6Address": ""
            },
            "944f260241be8ead1777315971936db41ef1fe4fdd41d33c3396eea84b03a28f": {
                "Name": "postgres",
                "EndpointID": "d9fe759283bbf415c91285cf2b845cd6bb88216090dc120efa80d706be6b2bd6",
                "MacAddress": "02:42:ac:16:00:02",
                "IPv4Address": "172.22.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "gf_net",
            "com.docker.compose.project": "grafana",
            "com.docker.compose.version": "1.29.2"
        }
    }
]

Below are the running Docker containers.

$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED              STATUS              PORTS                                       NAMES
2b55c5da9e85   grafana/grafana   "/run.sh"                About a minute ago   Up About a minute   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp   grafana
944f260241be   postgres:9.6.6    "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:5432->5432/tcp, :::5432->5432/tcp   postgres

The Grafana dashboard can be accessed at http://localhost:3000/ in your browser.





The data source can be added using postgres as the hostname, since containers on the same user-defined network can resolve each other by name.
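
For reference, the PostgreSQL data source settings matching the compose file above would be (values taken from the environment section; disabling SSL is an assumption for this local setup):

Host: postgres:5432
Database: postgres
User: postgres
Password: postgres
SSL Mode: disable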



The connection is successful. 

Happy Coding.