Friday, October 22, 2021

Run Redis Cluster on Docker

Introduction 

I was recently asked to implement a Redis Cluster on Docker. The cluster will spawn 3 master and 3 replica (slave) nodes.

Cluster

I have used the docker-compose.yml from Bitnami:
version: '2'
services:
  r01:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r01
    container_name: r01
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
    networks:
      - redis_net

  r02:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r02
    container_name: r02
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
    networks:
      - redis_net

  r03:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r03
    container_name: r03
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
    networks:
      - redis_net

  r04:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r04
    container_name: r04
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
    networks:
      - redis_net

  r05:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r05
    container_name: r05
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
    networks:
      - redis_net

  r06:
    image: docker.io/bitnami/redis-cluster:6.2
    hostname: r06
    container_name: r06
    depends_on:
      - r01
      - r02
      - r03
      - r04
      - r05
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_NODES=r01 r02 r03 r04 r05 r06
      - REDIS_CLUSTER_REPLICAS=1
      - REDIS_CLUSTER_CREATOR=yes
    networks:
      - redis_net


networks:
  redis_net:
    driver: bridge
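To bring the cluster up, run the command below (assuming the file is saved as docker-compose.yml in the current directory):

$ docker-compose up -d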
This will launch 6 containers:

$ docker ps
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS      NAMES
4a2d11bd1275   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   19 seconds ago   Up 18 seconds   6379/tcp   r06
702d098a2c9c   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   20 seconds ago   Up 19 seconds   6379/tcp   r03
878870c4e8e6   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   20 seconds ago   Up 18 seconds   6379/tcp   r02
a8d0d5302e1c   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   20 seconds ago   Up 19 seconds   6379/tcp   r01
9dd772da1aee   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   20 seconds ago   Up 18 seconds   6379/tcp   r04
263a280f180a   bitnami/redis-cluster:6.2   "/opt/bitnami/script…"   20 seconds ago   Up 18 seconds   6379/tcp   r05
We can now log in to one of the containers:

$ docker exec -it r01 /bin/bash
I have no name!@r01:/$
You can test the cluster settings:

$ redis-cli
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:52
cluster_stats_messages_pong_sent:55
cluster_stats_messages_sent:107
cluster_stats_messages_ping_received:50
cluster_stats_messages_pong_received:52
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:107
You can check the status of the cluster nodes:

127.0.0.1:6379> cluster nodes
535a218f52e42503bfe306c82dc6308b1bdb0cd5 172.21.0.3:6379@16379 myself,master - 0 1634935181000 3 connected 10923-16383
3dec2af9999f386b0212104f08badd3124a43535 172.21.0.4:6379@16379 slave 535a218f52e42503bfe306c82dc6308b1bdb0cd5 0 1634935181702 3 connected
1b554ca155d2008b319b0f3117e970a446169977 172.21.0.5:6379@16379 slave b02f0d075b12ac25c16230b0ce14f9482ec251e1 0 1634935180692 1 connected
145398bae4b7347c382fb8417fb67d35f0e77715 172.21.0.7:6379@16379 slave 9932d062c2f355aab0565758b0d686b92a20c8f9 0 1634935180000 2 connected
b02f0d075b12ac25c16230b0ce14f9482ec251e1 172.21.0.2:6379@16379 master - 0 1634935179682 1 connected 0-5460
9932d062c2f355aab0565758b0d686b92a20c8f9 172.21.0.6:6379@16379 master - 0 1634935182677 2 connected 5461-10922
You can run some set and get commands:

172.21.0.3:6379> set hi "hello"
-> Redirected to slot [16140] located at 172.21.0.3:6379
OK
172.21.0.3:6379> get hi
"hello"
172.21.0.3:6379>
We can run the same check on the other nodes to confirm the cluster is working, by running get hi on nodes r02-r06.
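For example, run get hi on r02 with the -c flag so redis-cli follows the cluster redirect; it should return "hello" (container names as defined in the compose file above):

$ docker exec -it r02 redis-cli -c get hi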

Happy Coding !!!

Monday, October 11, 2021

Nifi Multi Node Cluster on Docker

Introduction

I will run an Apache NiFi multi-node cluster on Docker, using an external ZooKeeper for cluster coordination.

Multi Node Cluster

I will use docker-compose to launch a 3-node Apache NiFi cluster, with a separate node for ZooKeeper.

I am using a custom Dockerfile so that I don't download the 1.5 GB+ zip files multiple times and slow down container launches. I have manually downloaded the NiFi and NiFi Toolkit zip files for version 1.14.0.
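For reference, the two archives can be downloaded ahead of time, for example from the Apache archive (URLs assume the standard archive layout; use a mirror if you prefer), and placed next to the Dockerfile so the ADD instructions below can pick them up:

$ curl -LO https://archive.apache.org/dist/nifi/1.14.0/nifi-1.14.0-bin.zip
$ curl -LO https://archive.apache.org/dist/nifi/1.14.0/nifi-toolkit-1.14.0-bin.zip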

FROM openjdk:8-jre

ARG UID=1000
ARG GID=1000
ARG NIFI_VERSION=1.14.0

ENV NIFI_BASE_DIR=/opt/nifi
ENV NIFI_HOME ${NIFI_BASE_DIR}/nifi-current
ENV NIFI_TOOLKIT_HOME ${NIFI_BASE_DIR}/nifi-toolkit-current

ENV NIFI_PID_DIR=${NIFI_HOME}/run
ENV NIFI_LOG_DIR=${NIFI_HOME}/logs

ADD sh/ ${NIFI_BASE_DIR}/scripts/
RUN chmod -R +x ${NIFI_BASE_DIR}/scripts/*.sh

# Setup NiFi user and create necessary directories
RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \
    && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
    && mkdir -p ${NIFI_BASE_DIR} \
    && chown -R nifi:nifi ${NIFI_BASE_DIR} \
    && apt-get update \
    && apt-get install -y jq xmlstarlet procps nano vim iputils-ping

USER nifi

# Download, validate, and expand Apache NiFi Toolkit binary.
ADD nifi-toolkit-${NIFI_VERSION}-bin.zip ${NIFI_BASE_DIR}/
RUN unzip ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
    && rm ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}-bin.zip \
    && mv ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION} ${NIFI_TOOLKIT_HOME} \
    && ln -s ${NIFI_TOOLKIT_HOME} ${NIFI_BASE_DIR}/nifi-toolkit-${NIFI_VERSION}

# Download, validate, and expand Apache NiFi binary.
ADD nifi-${NIFI_VERSION}-bin.zip ${NIFI_BASE_DIR}/
RUN unzip ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip -d ${NIFI_BASE_DIR} \
    && rm ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.zip \
    && mv ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} ${NIFI_HOME} \
    && mkdir -p ${NIFI_HOME}/conf \
    && mkdir -p ${NIFI_HOME}/database_repository \
    && mkdir -p ${NIFI_HOME}/flowfile_repository \
    && mkdir -p ${NIFI_HOME}/content_repository \
    && mkdir -p ${NIFI_HOME}/provenance_repository \
    && mkdir -p ${NIFI_HOME}/state \
    && mkdir -p ${NIFI_LOG_DIR} \
    && ln -s ${NIFI_HOME} ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}

VOLUME ${NIFI_LOG_DIR} \
       ${NIFI_HOME}/conf \
       ${NIFI_HOME}/database_repository \
       ${NIFI_HOME}/flowfile_repository \
       ${NIFI_HOME}/content_repository \
       ${NIFI_HOME}/provenance_repository \
       ${NIFI_HOME}/state

# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh

# Web HTTP(s) & Socket Site-to-Site Ports
EXPOSE 8080 8443 10000 8000

WORKDIR ${NIFI_HOME}

ENTRYPOINT ["../scripts/start.sh"]
You can create a local image using the command below:

$  docker build -t my_nifi -f Dockerfile_manual .
This will build the image in a few seconds.
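You can confirm the image is available locally:

$ docker images my_nifi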


Let's now create the docker-compose.yaml:

version: "3"
services:
  zk01:
    hostname: zk01
    container_name: zk01
    image: 'bitnami/zookeeper:3.7'
    ports:
      - '2181'
      - '2888'
      - '3888'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888
    networks:
      - nifinet

  nifi01:
    image: my_nifi:latest
    container_name: nifi01
    hostname: nifi01
    ports:
      - 6980:8080
    volumes:
      - /Users/aagarwal/dev/docker/java_cluster/nifi_conf1:/opt/nifi/nifi-current/conf
    networks:
      - nifinet
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zk01:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
      - NIFI_SENSITIVE_PROPS_KEY=testpassword

  nifi02:
    image: my_nifi:latest
    container_name: nifi02
    hostname: nifi02
    ports:
      - 6979:8080
    volumes:
      - /Users/aagarwal/dev/docker/java_cluster/nifi_conf2:/opt/nifi/nifi-current/conf
    networks:
      - nifinet
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zk01:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
      - NIFI_SENSITIVE_PROPS_KEY=testpassword

  nifi03:
    image: my_nifi:latest
    container_name: nifi03
    hostname: nifi03
    ports:
      - 6978:8080
    volumes:
      - /Users/aagarwal/dev/docker/java_cluster/nifi_conf3:/opt/nifi/nifi-current/conf
    networks:
      - nifinet
    environment:
      - NIFI_WEB_HTTP_PORT=8080
      - NIFI_CLUSTER_IS_NODE=true
      - NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
      - NIFI_ZK_CONNECT_STRING=zk01:2181
      - NIFI_ELECTION_MAX_WAIT=1 min
      - NIFI_SENSITIVE_PROPS_KEY=testpassword

networks:
  nifinet:
    driver: bridge

Use the command below to create the cluster:

$  docker-compose -f docker-compose.yaml up
This will launch one node for ZooKeeper and 3 nodes for NiFi.


It will take 10-15 minutes for the NiFi nodes to become available.
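You can follow an individual node's startup progress, for example:

$ docker logs -f nifi01
The node is ready once its web server has started and it has joined the cluster.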

Open http://localhost:6979/nifi in the browser:

NiFi is now ready for use.

Happy Coding !!!

Running Multi Node Zookeeper Cluster on Docker

Introduction 

I am working on installing Apache NiFi on a multi-node cluster. For this I needed a multi-node ZooKeeper cluster.

Zookeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. 

Multi Node Cluster

We will run a 3-node cluster on Docker, using the docker-compose.yaml below:

version: "3"
services:
  zk01:
    hostname: zk01
    container_name: zk01
    image: 'bitnami/zookeeper:3.7'
    ports:
      - '2181'
      - '2888'
      - '3888'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=1
      - ZOO_SERVERS=0.0.0.0:2888:3888,zk02:2888:3888,zk03:2888:3888
    networks:
      - zk_net

  zk02:
    hostname: zk02
    container_name: zk02
    image: 'bitnami/zookeeper:3.7'
    ports:
      - '2181'
      - '2888'
      - '3888'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=2
      - ZOO_SERVERS=zk01:2888:3888,0.0.0.0:2888:3888,zk03:2888:3888
    networks:
      - zk_net

  zk03:
    hostname: zk03
    container_name: zk03
    image: 'bitnami/zookeeper:3.7'
    ports:
      - '2181'
      - '2888'
      - '3888'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
      - ZOO_SERVER_ID=3
      - ZOO_SERVERS=zk01:2888:3888,zk02:2888:3888,0.0.0.0:2888:3888
    networks:
      - zk_net

networks:
  zk_net:
    driver: bridge

I am using the Bitnami ZooKeeper image. You can use the latest version instead of 3.7 if you prefer.

This will launch a 3-node ZooKeeper cluster without publishing fixed ports on the host machine.

$  docker-compose -f docker-compose.yaml up
This will show the logs of all 3 nodes as they start up.

After a few minutes the logs settle down once the nodes have connected to each other and elected a leader.

We can check the status by running the command below:

$  docker ps
This will show all 3 instances running.
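To see each node's role in the ensemble (one leader, two followers), you can run zkServer.sh status inside a container; the script sits alongside zkCli.sh, which the Bitnami image puts on the PATH:

$ docker exec -it zk01 zkServer.sh status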


You can now log in to one of the nodes using the command below.

$ docker exec -it zk02 /bin/bash
I have no name!@zk02:/$
This will log you in to the zk02 node.

Testing the Cluster

We will now create an entry on zk02; it will be replicated to all the other ZooKeeper nodes immediately. We will use the ZooKeeper CLI for this.

$ zkCli.sh -server zk02:2181
/opt/bitnami/java/bin/java
Connecting to zk02:2181
2021-10-12 05:56:18,058 [myid:] - INFO  [main:Environment@98] - Client environment:zookeeper.version=3.7.0-e3704b390a6697bfdf4b0bef79e3da7a4f6bac4b, built on 2021-03-17 09:46 UTC
2021-10-12 05:56:18,064 [myid:] - INFO  [main:Environment@98] - Client environment:host.name=zk02
2021-10-12 05:56:18,065 [myid:] - INFO  [main:Environment@98] - Client environment:java.version=11.0.12
2021-10-12 05:56:18,069 [myid:] - INFO  [main:Environment@98] - Client environment:java.vendor=BellSoft

This will open below zk cli command prompt: 

2021-10-12 05:56:18,195 [myid:zk02:2181] - INFO  [main-SendThread(zk02:2181):ClientCnxn$SendThread@1005] - Socket connection established, initiating session, client: /172.28.0.2:43996, server: zk02/172.28.0.2:2181
2021-10-12 05:56:18,249 [myid:zk02:2181] - INFO  [main-SendThread(zk02:2181):ClientCnxn$SendThread@1438] - Session establishment complete on server zk02/172.28.0.2:2181, session id = 0x20010a8acf10000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: zk02:2181(CONNECTED) 0] 
Now run create:
[zk: zk02:2181(CONNECTED) 0] create /hello world
Created /hello
We can query it as below:
[zk: zk02:2181(CONNECTED) 1] get /hello
world
We can also verify this by logging in to the other nodes and repeating the above steps to query the entry there.
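For example, a quick non-interactive check from zk01 (container names as defined in the compose file) should print world, confirming the entry was replicated:

$ docker exec -it zk01 zkCli.sh -server zk01:2181 get /hello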

Clean up

We can delete the entry:
[zk: zk02:2181(CONNECTED) 2] delete /hello

Happy Coding !!

Saturday, October 9, 2021

Expose Pod in Kubernetes

Introduction 

Today we will learn how to expose a Pod resource in Kubernetes.

Expose Pod

We will create and expose the Pod using the YAML below:

apiVersion: v1
kind: Pod
metadata:
  name: ashok-pod
  labels:
    app: web
spec:
  containers:
    - name: hello-app
      image: gcr.io/google-samples/hello-app:1.0
      ports:
      - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ashok-svc
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: web

We will create the pod and service using:

$ kubectl apply -f pod2.yaml
pod/ashok-pod created
service/ashok-svc created
We can check the status using the command below:

$ kubectl get all
NAME            READY   STATUS    RESTARTS   AGE
pod/ashok-pod   1/1     Running   0          92s

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service/ashok-svc    LoadBalancer   10.108.156.134   <pending>     8080:30901/TCP   92s
service/kubernetes   ClusterIP      10.96.0.1        <none>        443/TCP          3d2h

Create a tunnel to access the service in the browser:
$ minikube tunnel
🏃  Starting tunnel for service ashok-svc.
Open http://localhost:8080/ in the browser:
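Alternatively, while the tunnel is running you can test the service from the command line; it should return the hello-app greeting:

$ curl http://localhost:8080/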



Happy Coding !!

Running StorageClass Resource on Kubernetes

Introduction

Today we will learn how to provision a dynamic volume using a StorageClass on Minikube.

StorageClass

We will create a StorageClass, a PersistentVolumeClaim, a Pod and a Service using the YAML below:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ashok-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: slow
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ashok-pv-pod
  labels:
    app: hello  
spec:
  volumes:
    - name: ashok-pv-storage
      persistentVolumeClaim:
        claimName: ashok-pv-claim
  containers:
    - name: ashok-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: ashok-pv-storage
---
apiVersion: v1
kind: Service
metadata:
  name: helloweb-svc
  labels:
    app: hello
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: hello
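Assuming the combined manifest above is saved as storage-demo.yaml (the file name is just illustrative), apply it first to create the StorageClass, PVC, Pod and Service:

$ kubectl apply -f storage-demo.yaml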

We need to create an index.html file for our Nginx pod; the file will live on the volume provisioned through our PersistentVolumeClaim and mounted in the Pod.

You can log in to the Pod using:

$ kubectl exec -it ashok-pv-pod -- /bin/bash
root@ashok-pv-pod:/#
Inside the pod, create the index.html file at /usr/share/nginx/html/index.html:

$ echo 'Hello from Kubernetes storage using storage class' > /usr/share/nginx/html/index.html
Create a tunnel to access the service in the browser:
$ minikube tunnel
🏃  Starting tunnel for service helloweb-svc.
Open http://localhost:8080/ in the browser:



Happy Coding !!


Pass configs to Kubernetes using ConfigMap

Introduction

Today we will learn about the ConfigMap resource in Kubernetes. It is used to pass configuration to Kubernetes resources like Pods, Deployments, etc.

ConfigMap

We will create a ConfigMap using yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  log_level: INFO
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
  example.property.file: |-
    property.1=value-1
    property.2=value-2
    property.3=value-3    

We will create the ConfigMap from the above YAML:

$ kubectl apply -f config.yaml
configmap/special-config created
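You can inspect the created ConfigMap with:

$ kubectl get configmap special-config -o yaml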
We will use this ConfigMap in the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: ashok-config-pod
spec:
  containers:
    - name: ashok-config-pod
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: log_level
  restartPolicy: Never
Let's apply the above YAML:
$ kubectl apply -f pod.yaml
pod/ashok-config-pod created
We can check the status of the pod:
$ kubectl get po
NAME               READY   STATUS      RESTARTS   AGE
ashok-config-pod   0/1     Completed   0          48s
We can see the status is Completed for our pod. The values we passed from the ConfigMap show up as environment variables in the pod's logs:
$ kubectl logs ashok-config-pod
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
LOG_LEVEL=INFO                         <--- Our config from the ConfigMap Resource
HOSTNAME=ashok-config-pod
SHLVL=1
HOME=/root
ASHOK_SVC_SERVICE_HOST=10.106.81.159 
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
ASHOK_SVC_PORT=tcp://10.106.81.159:80
ASHOK_SVC_SERVICE_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ASHOK_SVC_PORT_80_TCP_ADDR=10.106.81.159
SPECIAL_LEVEL_KEY=very                 <--- Our config from the ConfigMap Resource
ASHOK_SVC_PORT_80_TCP_PORT=80
ASHOK_SVC_PORT_80_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
ASHOK_SVC_PORT_80_TCP=tcp://10.106.81.159:80
We will use another Pod YAML to see example.property.file mounted as a file under /etc/config:
apiVersion: v1
kind: Pod
metadata:
  name: ashok-config-pod
spec:
  containers:
    - name: ashok-config-pod
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
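Since this manifest reuses the pod name ashok-config-pod, delete the earlier pod before applying it (the manifest file name below is assumed):
$ kubectl delete po ashok-config-pod
$ kubectl apply -f pod-volume.yaml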
The pod's logs will then list the files mounted from the ConfigMap, including example.property.file.

Happy Coding !!

Create PersistentVolume, PersistentVolumeClaim on Kubernetes

Introduction

I will show how to create a PersistentVolume and a PersistentVolumeClaim on Kubernetes. I am using Minikube for this walkthrough.

We will first create an index.html file for our Nginx pod and then serve this file from storage provided through a PersistentVolume and PersistentVolumeClaim mounted in our Pod.

Storage

You can log in to the Minikube node using:

$ minikube ssh
Last login: Sun Oct 10 01:14:15 2021 from 192.168.49.1
docker@minikube:~$
We will now create the index.html file at /mnt/data/index.html:
$ sudo mkdir /mnt/data
$ sudo sh -c "echo 'Hello from PVC example' > /mnt/data/index.html"

PersistentVolume

We can create a PersistentVolume using yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ashok-pv-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Apply the PV:

$ kubectl apply -f pv.yaml
persistentvolume/ashok-pv-vol created

We can check the status of the PersistentVolume:

$ kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ashok-pv-vol   20Gi       RWO            Retain           Available           manual                  4s

PersistentVolumeClaim

We will create a PersistentVolumeClaim using the YAML below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ashok-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

The PersistentVolumeClaim can be created as below:

$ kubectl apply -f pvc.yaml
persistentvolumeclaim/ashok-pv-claim created

We can check the status of the PersistentVolumeClaim:

$ kubectl get pvc
NAME             STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ashok-pv-claim   Bound    ashok-pv-vol   20Gi       RWO            manual         8s

The PersistentVolume status has now changed to Bound:

$ kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
ashok-pv-vol   20Gi       RWO            Retain           Bound    default/ashok-pv-claim   manual                  2m31s

Let's create a Pod that uses this statically provisioned volume.

Using PVC in Pod


We will create a pod using yaml:
apiVersion: v1
kind: Pod
metadata:
  name: ashok-pv-pod
spec:
  volumes:
    - name: ashok-pv-storage
      persistentVolumeClaim:
        claimName: ashok-pv-claim
  containers:
    - name: ashok-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: ashok-pv-storage

Let's create the Pod:
$ kubectl apply -f pv-pod.yaml
pod/ashok-pv-pod created
We will check the status of the pod:
$ kubectl get po
NAME           READY   STATUS    RESTARTS   AGE
ashok-pv-pod   1/1     Running   0          79s
To make this pod accessible from the browser, you can use kubectl port-forward:
$ kubectl port-forward ashok-pv-pod 8888:80
Forwarding from 127.0.0.1:8888 -> 80
Forwarding from [::1]:8888 -> 80

Open http://localhost:8888/ in the browser:
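Alternatively, while the port-forward is running, you can test from another terminal; this should print the 'Hello from PVC example' line we wrote to index.html:

$ curl http://localhost:8888/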

Clean Up

We can delete the pod, pvc and pv using below commands:

$ kubectl delete po ashok-pv-pod
pod "ashok-pv-pod" deleted
$ kubectl delete pvc ashok-pv-claim
persistentvolumeclaim "ashok-pv-claim" deleted
$ kubectl delete pv ashok-pv-vol
persistentvolume "ashok-pv-vol" deleted

Let me know if you need any help.

Happy Coding !!!