Sunday, April 7, 2024

Kubernetes Services

  An abstract way to expose an application running on a set of Pods as a network service.

With Kubernetes, you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
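
For example, from inside any Pod in the cluster a Service is reachable by its DNS name. The Service name my-service and the default namespace below are placeholders that match the manifests later in this post:

# From inside any Pod in the cluster (illustrative name/namespace)
curl http://my-service.default.svc.cluster.local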

ClusterIP:

ClusterIP is the default Kubernetes Service type. A ClusterIP Service is created inside the cluster and can only be reached by other Pods in that cluster, so we use this type when we want to expose an application to other Pods within the same cluster.
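
For comparison with the NodePort and LoadBalancer manifests later in this post, a minimal ClusterIP Service could look like the sketch below (the Service name is illustrative; the selector and ports match the nginx Deployment used in the examples):

# Service - ClusterIP (type defaults to ClusterIP when omitted)
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
  labels:
    app: nginx-app
spec:
  type: ClusterIP
  selector:
    app: nginx-app
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 80    # containerPort on the backing Pods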

NodePort:

NodePort opens a specific port on every node/VM in the cluster, and any traffic that hits that port is forwarded to the Service.

There are a few limitations, so it's generally not advised to rely on NodePort in production:

- Only one Service per nodePort

- You can only use ports in the 30000-32767 range


LoadBalancer:

This is the standard way to expose a Service to the internet. All traffic on the specified port is forwarded to the Service, with no filtering and no routing. The cloud provider assigns an external IP that acts as a load balancer for the Service, so this type relies on a cloud provider's load-balancer integration.

A few limitations with LoadBalancer:

- Every Service exposed gets its own external IP address

- It gets very expensive when many Services are exposed


1. Deployment & NodePort service manifest file


Deployment YAML file:

~~~~~~~~~~~~~~~~~~~~~

# Deployment
# nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.9
        ports:
        - containerPort: 80


--------------------------------------


NodePort Service YAML file:

~~~~~~~~~~~~~~~~~~~~~~~~~~

# Service
# nginx-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
  - nodePort: 31111
    port: 80
    targetPort: 80


*******************************************************************

2. Create and Display Deployment and NodePort


kubectl create -f nginx-deploy.yaml

kubectl create -f nginx-svc-np.yaml

kubectl get service -l app=nginx-app

kubectl get po -o wide

kubectl describe svc my-service
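
To read back just the nodePort that was assigned (a convenience command, using the Service name my-service from the manifest above):

kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'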


*******************************************************************

3. Testing


# To get inside the pod

kubectl exec -it [POD-NAME] -- /bin/sh


# Create test HTML page

cat <<EOF > /usr/share/nginx/html/test.html
<!DOCTYPE html>
<html>
<head>
<title>Testing..</title>
</head>
<body>
<h1 style="color:rgb(90,70,250);">Hello, NodePort Service...!</h1>
<h2>Congratulations, you passed :-) </h2>
</body>
</html>
EOF

exit


Test using Pod IP:

~~~~~~~~~~~~~~~~~~~~~~~

kubectl get po -o wide

curl http://[POD-IP]/test.html

Test using Service IP (ClusterIP):

~~~~~~~~~~~~~~~~~~~~~~~~~~~

kubectl get svc -l app=nginx-app

curl http://[CLUSTER-IP]/test.html


Test using Node IP (external IP)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

curl http://[NODE-IP]:31111/test.html

note: [NODE-IP] is the external IP address of a node; 31111 is the nodePort set in the Service manifest above.
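
To list node IPs (the INTERNAL-IP / EXTERNAL-IP columns), for example:

kubectl get nodes -o wide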


*******************************************************************

4. Cleanup


kubectl delete -f nginx-deploy.yaml

kubectl delete -f nginx-svc-np.yaml

kubectl get deploy

kubectl get svc

kubectl get pods


*******************************************************************


LoadBalancer


1. YAML: Deployment & Load Balancer Service


# Deployment
# controllers/nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.7.9
        ports:
        - containerPort: 80


------------------------------------


# Service - LoadBalancer
#lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: LoadBalancer
  ports:
  - nodePort: 31000
    port: 80
    targetPort: 80


*******************************************************************

2. Create & Display: Deployment & Load Balancer Service


kubectl create -f nginx-deploy.yaml

kubectl create -f lb.yaml

kubectl get pod -l app=nginx-app

kubectl get deploy -l app=nginx-app 

kubectl get service -l app=nginx-app

kubectl describe service my-service
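
Once the cloud provider has provisioned the load balancer, its external address appears in the Service status. One way to pull just that value (assuming the my-service name above; some providers report a hostname instead of an ip):

kubectl get svc my-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'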


*******************************************************************

3. Testing Load Balancer Service


# To get inside the pod

kubectl exec -it [pod-name] -- /bin/sh


# Create test HTML page

cat <<EOF > /usr/share/nginx/html/test.html
<!DOCTYPE html>
<html>
<head>
<title>Testing..</title>
</head>
<body>
<h1 style="color:rgb(90,70,250);">Hello, Kubernetes...!</h1>
<h2>Load Balancer is working successfully. Congratulations, you passed :-) </h2>
</body>
</html>
EOF

exit


# Test using the load balancer's external IP

curl http://[LOAD-BALANCER-IP]

curl http://[LOAD-BALANCER-IP]/test.html


# Test using nodePort (31000 is set in the Service manifest above)

curl http://[NODE-IP]:31000

curl http://[NODE-IP]:31000/test.html



*******************************************************************

4. Cleanup


kubectl delete -f nginx-deploy.yaml

kubectl delete -f lb.yaml

kubectl get pod 

kubectl get deploy 

kubectl get service 


Ingress

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

    internet
        |
   [ Ingress ]
   --|-----|--
   [ Services ]

An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.


kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/baremetal/deploy.yaml
kubectl get svc --all-namespaces
kubectl run nginx --image=nginx
kubectl expose pod nginx --type=ClusterIP --port=80

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
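
To apply the Ingress and test the /nginx route (a sketch: [NODE-IP] is any node's IP and [INGRESS-NODEPORT] is the NodePort of the ingress-nginx controller Service listed by the kubectl get svc --all-namespaces command above):

kubectl apply -f ingress.yaml

kubectl get ingress

curl http://[NODE-IP]:[INGRESS-NODEPORT]/nginx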
