Code Review
- Download the Kubernetes learning kit.
- Unzip it into your `HashiCorp` folder.
- Use `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch1\1.5\k8s-min-5GiB-wo-add-nodes` and `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch1\1.6` to see all the settings files.
Vagrantfile
- The Vagrantfile is written in Ruby.
- There are three worker nodes.
- It defines the versions for Kubernetes, Docker, and containerd.
Master Node
- The current Vagrantfile API version is 2.
- The `do ... end` block runs lines 19–35.
- `k8s_V` is a variable for the Kubernetes version, and `[0..3]` is the substring range to read. For example, if `k8s_V` is `1.20.0`, then `k8s_V[0..3]` returns `1.20`.
- The VM image is a private CentOS image.
- Our virtualization software is VirtualBox.
- The master node has 2 CPUs.
- The master node has 1746 MB of memory.
- We made a group to turn the VMs on and off easily.
- If you want to see the customized cgroup settings, run `kubectl get cm kubelet-config-1.22 -n kube-system -o yaml | grep cgroup`.
- The host name is `m-k8s`.
- `private_network` creates a network connection with your PC.
- `forwarded_port` maps the guest port to local host port `60010`.
- When there is a port collision, it is fixed automatically.
- The port ID is `ssh`.
- `synced_folder` syncs your PC's folders with the VM's folders if `disabled` is false.
- Run the `xxx.sh` files in the shell with arguments.
- We separated the cluster setup part for practice.
Worker Node
- Loop from 1 to 3.
- Each worker node has 1 CPU.
k8s_env_build.sh
- This file sets up the Kubernetes environment.
- The script runs in bash.
- The `vi` command is aliased to `vim`.
- Swap must be off to install Kubernetes.
- The setting keeps swap off even after you reboot the machine.
- `gpgcheck` is off, and `repo_gpgcheck` is off. If you need security, you can set these to 1.
- Use `yum` to add the Docker community repository.
- SELinux is off.
- If you need security, you can turn it on.
- `br_netfilter` is the bridge netfilter module; it connects the machines into one network.
- Node names are set automatically, for example `m-k8s` or `w1-k8s`.
- `$1` is `k8s_V`.
- DNS settings.
k8s_env_build.sh
- This script installs Kubernetes.
- `epel-release` provides extended packages for CentOS from Red Hat, for example extended storage.
- `vim-enhanced` installs vim.
- You don't need to install `git`, but we use it for practice.
- `$2` is `docker_V` and `$3` is `ctrd_V`.
- `$1` is `k8s_V`.
- Get the system ready.
k_cfg_n_git_clone.sh
- `bash-completion` lets us auto-complete kubectl commands.
- `alias` creates shortcuts for commands.
- `complete -F __start_kubectl k` allows `k` to use `bash-completion`.
- Download the practice code from Git.
WO_master_node.sh
- We need a `token` to join the worker nodes to the master node.
- `token-ttl` expires the token in 24 hours.
- `pod-network-cidr` assigns the pod network.
- `apiserver-advertise-address` is fixed to the master node's IP address to avoid join problems.
- Skip verification when we use kubectl.
- Apply Calico for the Kubernetes network.
WO_work_nodes.sh
- Join the worker node to the master node.
IDE
Deploy Kubernetes VM
- Open your `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch1\1.5\k8s-min-5GiB-wo-add-nodes` folder in a command prompt and run `vagrant up`.
- Open SuperPuTTY and click [File]-[Import Sessions]-[From File] to import `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch1\1.5\Sessions(k8s_learning).XML`.
Install Kubernetes with kubeadm
- Use `~/_Lecture_k8s_learning.kit/ch1/1.6/WO_master_node.sh` to install Kubernetes on the master node.
- Open worker nodes #1, #2, and #3 and run the script `~/_Lecture_k8s_learning.kit/ch1/1.6/WO_worker_node.sh`.
- Open your `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch1\1.5\k8s-min-5GiB-wo-add-nodes` folder and run `vagrant destroy -f` to delete all the VMs.
- We need to delete all the VMs to upgrade them.
IDE 2
- Update Kubernetes, Docker, and containerd.
- Upgrade the memory of the master and worker nodes.
Deploy Kubernetes VM
- Open your `~\HashiCorp\_Lecture_k8s_learning.kit-main\ch2\2.1\k8s-UpTo-10GiB` folder in a command prompt and run `vagrant up`.
Definitions
Object
- Container
  - A container holds one piece of software or one system.
- Pod
  - A pod holds one container or a group of containers.
  - A pod has a volume so that its containers can keep data.
- Deployment
  - ReplicaSet
  - A Deployment needs a ReplicaSet to manage the number of pods.
  - Honestly, the code is really similar to a Deployment, but we need the ReplicaSet for rolling updates.
  - For example, when you upgrade a pod, the Deployment creates a new ReplicaSet, and the ReplicaSet replicates the pods.
- Job
  - You can use a Job to reduce memory usage.
  - The default `restartPolicy` in other objects is `Always`, which restarts the object forever. A Job must set `restartPolicy` explicitly, and the value must be `OnFailure` or `Never`.
  - Use `completions` to run pods sequentially.
  - Use `parallelism` to run pods in parallel.
  - Use `activeDeadlineSeconds` to delete the Job a specific time after it starts.
  - Use `ttlSecondsAfterFinished` to delete the Job a specific time after it completes.
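The Job options above can be collected into one manifest. A minimal sketch (the name and image are hypothetical, not from the lecture kit):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-example            # hypothetical name
spec:
  completions: 3               # run 3 pods sequentially
  parallelism: 1               # raise this to run pods in parallel
  activeDeadlineSeconds: 300   # kill the Job 300s after it starts
  ttlSecondsAfterFinished: 60  # delete the Job 60s after it completes
  template:
    spec:
      containers:
      - name: runner
        image: busybox
        command: ["sh", "-c", "sleep 10"]
      restartPolicy: OnFailure  # Jobs accept only OnFailure or Never
```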
- CronJob
  - Use a `CronJob` to run a Job on a schedule.
  - cron rule: `*/#` repeats the job every # periods, and a plain `#` runs the job at #.
  - `successfulJobsHistoryLimit` keeps finished Jobs up to a specific number; beyond that limit the oldest Job is deleted automatically. The default value is 3.
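A minimal CronJob sketch (name and image are hypothetical); `*/2 * * * *` uses the `*/#` form to run every 2 minutes:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-example            # hypothetical name
spec:
  schedule: "*/2 * * * *"          # */# form: every 2 minutes
  successfulJobsHistoryLimit: 3    # default; oldest Job pruned beyond this
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: runner
            image: busybox
            command: ["date"]
          restartPolicy: Never
```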
  - Use `k get po | wc -l` to get the total pod count.
- DaemonSet
  - A DaemonSet creates one pod on each node.
  - A DaemonSet is quite similar to a Deployment, but it has no `replicas` field, because each node holds exactly one pod of the DaemonSet.
  - Use `vagrant up w4-k8s-1.22` to create a fourth worker node.
  - Use `vagrant destroy -f w4-k8s-1.22` to destroy the fourth worker node.
  - When you add a node, the DaemonSet pod is created on it automatically.
- StatefulSet
  - A StatefulSet keeps the state of its pods.
  - You should set `serviceName`, because StatefulSet pods get fixed names instead of hash values.
- Application
  - Pods containing containers and volumes for a specific workload form an application.
  - For example, NGINX, MySQL, etc.
  - Even when you add something to the application, it is still an application.
Commands
- get
  - Read objects.
- run, create, apply
  - Create objects.
- delete
  - Delete objects.
- exec
  - Access a container in a pod.
- scale
  - Increase or decrease the number of pods.
- edit
  - Change a deployed object.
- events
  - Check events in a namespace.
- describe
  - Check the status of an object.
- logs
  - Check logs. Logs are only written once the deployment has succeeded.
yaml
- `-o yaml`
  - Read an object's YAML code.
- `--dry-run=client`
  - Simulate a run to read the YAML without creating anything.
- command
  - Use `command` in a YAML file to run a specific command.
- multiple commands
  - Use `&&` to run the next command only if the previous one succeeds.
  - Use `;` to run multiple commands one after another.
  - Use `|` (a YAML literal block) to write commands on separate lines.
  - Use `args` to separate the configuration from the commands.
Expose Deployed Application
- We don't use HostPort and HostNetwork, because then we would have to know on which node the pods are running.

Port-forward
- We have a host port and a guest port. When we connect to the host port, it is translated to the guest port.
- For example, our host port is 60010; when we connect to 60010, the master node maps 60010 to 22 and connects to SSH.
- `k port-forward fwd-chk-hn 80:80` opens port 80 on a specific (local) address and forwards it to the pod's port 80.
- `k port-forward --address 0.0.0.0 fwd-chk-hn 80:80` opens port 80 on all addresses and forwards it to the pod's port 80.
HostPort
- Outside users must know which node to connect to.
- For example, 8080 is the host port on the second worker node, and it is mapped to container port 80.
HostNetwork
- Outside users must know which node to connect to, and they connect to that port directly.
NodePort
- The user connects to port 30000 on a node; the node forwards to the service's port 80, and the service forwards to the pod's port 80.
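A sketch of the 30000 → 80 → 80 chain described above (the service name is hypothetical, and `app: deploy-nginx` is an assumed selector):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: np-svc          # hypothetical name
spec:
  type: NodePort
  selector:
    app: deploy-nginx   # assumed pod label
  ports:
  - port: 80            # service port
    targetPort: 80      # pod port
    nodePort: 30000     # port opened on every node
```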
LoadBalancer
- We will use MetalLB instead of NodePort.
ExternalName
- ExternalName has no Deployment because it just forwards the service to an external name.
- You call the service by its `metadata.name`, and it resolves to the `externalName` value.
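A sketch, with a hypothetical service name and external domain:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ex-svc                 # you call this name inside the cluster
spec:
  type: ExternalName
  externalName: example.com    # DNS name the service resolves to
```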
ClusterIP
- ClusterIP exposes a Deployment or Pod inside the cluster.
Headless
- A headless service exposes a Deployment or Pod without a cluster IP.
- Headless services communicate by domain name, without an IP, and connect to a StatefulSet by domain name.
- The StatefulSet's `serviceName` must match the headless service's name.
- When you use a StatefulSet with a LoadBalancer, each external IP call can reach a different pod. Therefore, I recommend using a StatefulSet with a headless service.
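A sketch of a headless service (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc     # must match the StatefulSet's serviceName
spec:
  clusterIP: None        # headless: no cluster IP, DNS records only
  selector:
    app: sts-app         # assumed pod label
  ports:
  - port: 80
```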
EndPoint
- When you create a Deployment and a LoadBalancer together, an Endpoints object is also created.
- You can create an Endpoints object independently.
- Create a Service first as ClusterIP, then create an Endpoints object with the service's name and the LoadBalancer IP. As a result, you can call the Endpoints object via the service name, and it is bound to the LoadBalancer IP, like double binding.
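A sketch of the double binding described above (the name and IP are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-data    # ClusterIP service with no selector
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-data    # same name binds it to the service above
subsets:
- addresses:
  - ip: 192.168.1.11     # hypothetical LoadBalancer IP
  ports:
  - port: 80
```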
Ingress
- An Ingress cannot exist without a service.
- The Ingress holds the routing information, and the service routes to the app.
- Services
  - deploy-nginx
  - deploy-hn
  - deploy-ip
- Ingress
  - with NodePort
  - with LoadBalancer
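A sketch of an Ingress routing to two of the services named above (the Ingress name and paths are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-example      # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /              # default route
        pathType: Prefix
        backend:
          service:
            name: deploy-nginx
            port:
              number: 80
      - path: /hn            # hypothetical sub-path
        pathType: Prefix
        backend:
          service:
            name: deploy-hn
            port:
              number: 80
```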
Label vs Annotation
- Labels are for humans; annotations are for the system.
Volume
emptyDir
- emptyDir is an empty directory shared by the containers in a pod.
- In this example, we create 2 containers.
- Each container creates its first page, and that page goes into the empty directory.
- The created empty directory becomes a volume, and this volume connects the 2 containers.
- We called container 2 by IP, but the content actually comes from container 1.
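A sketch of two containers sharing one emptyDir volume (names and images are hypothetical, not the lecture's exact code):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir           # hypothetical name
spec:
  containers:
  - name: container-1          # serves the shared directory
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: container-2          # writes into the same directory
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/index.html && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /mnt
  volumes:
  - name: shared
    emptyDir: {}               # empty directory shared by both containers
```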
hostPath
- hostPath can use the node's own directories.
- It connects `/var/log` and `/host-log` under the name `hostpath-directory`.
- When a Deployment is released onto the nodes, each node can receive a different number of pods.
- So it's hard to cover every node's hostPath with a Deployment.
- A DaemonSet creates one pod per node, which means the hostPath is covered evenly across the nodes.
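A sketch of a DaemonSet mounting the node's `/var/log` as `/host-log`, as described above (the name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-hostpath            # hypothetical name
spec:
  selector:
    matchLabels:
      app: ds-hostpath
  template:
    metadata:
      labels:
        app: ds-hostpath
    spec:
      containers:
      - name: logger
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: hostpath-directory
          mountPath: /host-log       # path inside the container
      volumes:
      - name: hostpath-directory
        hostPath:
          path: /var/log             # path on the node
```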
NFS
- Network File System
PV & PVC
- accessModes
  - ReadWriteOnce (RWO): read and write on only one node.
  - ReadOnlyMany (ROX): read on several nodes.
  - ReadWriteMany (RWX): read and write on several nodes.
  - Block storage uses RWO and ROX, and object storage uses RWX.
PV
- Persistent Volume
- persistentVolumeReclaimPolicy
  - Retain: keep the PV even after you delete the PVC.
  - Delete: delete the PV when you delete the PVC.
PVC
- Persistent Volume Claim
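A sketch of a PV with its claim (the names and the hostPath backing store are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-example                       # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce                        # RWO
  persistentVolumeReclaimPolicy: Retain  # keep the PV after the PVC is deleted
  hostPath:
    path: /tmp/pv-data                   # hypothetical backing store
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-example
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                       # binds to a matching PV
```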
StorageClass
- A Persistent Volume Claim goes to the StorageClass first, and then the StorageClass makes a Persistent Volume.
- Provisioning
  - Static: NFS, PV & PVC
  - Dynamic: StorageClass
- In NFS mode, the administrator must create a PV every time one is requested.
- In PV & PVC mode, the administrator must always prepare PVs before users create PVCs.
- In StorageClass mode, the administrator doesn't need to prepare PVs in advance, because the StorageClass creates them automatically whenever one is requested.
- provisioner
  - The NFS server and the NFS client root should have the same values.
  - StorageClass
  - PVC
  - Deployment
vol
- volumeClaimTemplates
  - This volume type is only for StatefulSets.
  - A StatefulSet has state and an independent domain (with a headless service), which means the StatefulSet accesses each pod independently.
  - Therefore, each pod has a unique value and status.
  - When a pod claims storage through volumeClaimTemplates, an independent PV is created for it.
  - As a result, when you deploy a StatefulSet, each pod has an independent domain, volumeClaimTemplates, and PV.
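A sketch of volumeClaimTemplates in a StatefulSet (names are hypothetical; `serviceName` must point at your headless service):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sts-example            # hypothetical name
spec:
  serviceName: headless-svc    # assumed headless service name
  replicas: 2
  selector:
    matchLabels:
      app: sts-example
  template:
    metadata:
      labels:
        app: sts-example
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # one independent PVC (and PV) per pod
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```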
Node Contributions and Management
Cordon
- When you cordon a specific node, that node is excluded from scheduling.
- w3-k8s is not updated because we cordoned it.
Drain
- Drain moves the existing pods to other nodes and cordons the node.
- Drain a node for maintenance or when it may cause errors.
- At first you will get an error on drain, because DaemonSet pods cannot be deleted.
- So you should use `--ignore-daemonsets --force`.
- And now you can see we lost one pod, `net`.
nodeName
- Use `nodeName` to pick the node where your pod should be deployed.
nodeLabel
- With node labels and `nodeSelector` you can release several pods at once.
- Use `k get node --show-labels` to see the labels of the nodes.
- Use `k label node [node] [label]` to add a label to a specific node.
- Use `k get node -l [label]` to search nodes by label.
- The `=` symbol in a label separates the key (left) from the value (right), and you can search by the key alone.
- Use `k label node [node] [label]-` to delete a label from a node.
nodeSelector
- Use `nodeSelector` to choose the node on which the pod should be deployed.
nodeAffinity
- Use nodeAffinity to set more flexible conditions.
- There are two options: `requiredDuringSchedulingIgnoredDuringExecution` and `preferredDuringSchedulingIgnoredDuringExecution`.
- Operators:
  - In vs NotIn
  - Exists vs DoesNotExist
  - Gt vs Lt
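A sketch of a required nodeAffinity rule (the label key and value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity      # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label key
            operator: In       # others: NotIn, Exists, DoesNotExist, Gt, Lt
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```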
Taints & Tolerations
- Effect
  - NoSchedule: deploy only pods with a matching toleration.
  - PreferNoSchedule: when there are no other nodes to deploy to, ignore the taint.
  - NoExecute: reschedule, and evict pods that have no toleration.
- When the toleration key matches the master's taint, a DaemonSet can be deployed on the master too.
- The pod cannot be deployed on w3-k8s, because w3-k8s has a taint and the pod has no toleration.
- The pod can be deployed on w1-k8s, w2-k8s, and w3-k8s, because the pod has the toleration.
- The code above deletes the taints on the nodes.
- Or you can just recreate the nodes to delete the taints.
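A sketch of a pod tolerating a `NoSchedule` taint (the key and value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration       # hypothetical name
spec:
  tolerations:
  - key: DB                  # hypothetical taint key
    operator: Equal
    value: customer-info     # hypothetical taint value
    effect: NoSchedule       # must match the taint's effect
  containers:
  - name: nginx
    image: nginx
```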
Pod Composition and Management
Label
- Same as node labels.
- `run=nginx` is created by `kubectl run`, and `app=nginx` is created by `kubectl create`.
- Use `k label pod [pod] [label]` to add a custom label to a pod.
Static Pod
- Static pods deploy etcd, the controller manager, and the scheduler.
- The kubelet reads the YAML files and creates the API server, etcd, controller manager, and scheduler.
- Use `cp [Your Code Path] [Target Path]` to copy the YAML code.
- Use `scp [Your Code Path] [Target Node]:[Target Path]` to copy the YAML code to another node.
- Use `rm [Target Path]` to remove the copied YAML code.
- You can only remove YAML files on the node you are logged into; to delete a copied file on another node, access that node first.
restartPolicy
- Options
  - Always: always restart.
  - Never: never restart.
  - OnFailure: restart only on failure.
- Even a typo triggers a restart when the pod has the OnFailure option.
- A Deployment only accepts the Always option, because a Job runs once while a Deployment runs continuously.
Probe
- startupProbe
  - Probes whether the container has started.
  - On failure, kills the container and follows restartPolicy.
- livenessProbe
  - Probes whether the container is working.
  - On failure, kills the container and follows restartPolicy.
- readinessProbe
  - Probes whether the container's application can serve requests.
  - On failure, stops routing traffic to the pod.
livenessProbe
- Check options
  - exec: execute a command in the container.
  - httpGet: check the response to an HTTP GET request.
  - tcpSocket: check that the container's address and port are reachable.
- `watch "kubectl describe po liveness-exec | tail"` shows the changes continuously.
- This one never becomes healthy, because initialDelaySeconds is 10 and periodSeconds is also 10: it repeats the delay and restart forever.
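A sketch of an exec-type livenessProbe (the name, image, and timings are illustrative, not the lecture's exact file):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-sketch       # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
    livenessProbe:
      exec:                        # other options: httpGet, tcpSocket
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5       # wait before the first probe
      periodSeconds: 10            # probe every 10 seconds
```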
readinessProbe
- The application will not be killed, because readinessProbe doesn't restart it.
- readinessProbe just removes the pod's endpoint.
- How readinessProbe removes the endpoint
- How readinessProbe restores the endpoint
startupProbe
- startupProbe is not used alone, because it is only for boot-up checks.
Init Container
- An init container works like a constructor, making pod setup easier.
- Pod initialization is done by the init container.
Multi Container
Sidecar
- The first container builds the web page, and the second container runs the server (e.g., NGINX).
- The second container presents the first container's web page.
Ambassador
- The second container is a proxy server, and it takes over the presentation for the first container.
- That means the second container communicates with external servers.
Adapter
- The first container produces data and the second container translates that data.
- The second container exposes the translated data externally.
- In this case, the first container is NGINX (the server) and the second is Prometheus (the translator).
Pod Affinity and Anti-Affinity
- You can use pod affinity to group your pods.
- You can use anti-affinity to keep your pods out of a pod group.

Affinity
- The pod is always deployed on w1.
- Newly created pods are always deployed on w3.

Anti-Affinity
- Anti-affinity deploys pods where there is no matching affinity.
- In this case, the pods are deployed on w2, because w1 and w3 already hold affinity pods from the previous commands.
TopologySpreadConstraints
- topologySpreadConstraints can spread pods evenly even in special situations.
- First, the cluster reads the number of nodes and treats all the nodes as a region.
- Then it divides the nodes and treats the divided groups as zones.
- Before we practice, we need one more worker node to make the count even.
- This creates labels on each node (e.g., region and zone).
- w2 already had some pods before this command, so the topology divides them as 2 pods on w1, 1 pod on w3, and 1 pod on w4.
- This creates 12 pods on w3.
- Rerun the topology and it divides them as 2 pods on w1 and 2 pods on w4.
- Please remove w4 in VirtualBox!
Cluster Management
Access Control
RBAC(Role-Based Access Control)
- Node: access permission based on the kubelet of the scheduled node.
- ABAC: attribute-based access control.
- RBAC: access permission based on roles.
- Webhook: authorization decided from an HTTP POST payload.
- Set a role with the permitted behavior and bind the role to a group.
- Context (Kubernetes cluster)
  - dev1 is EKS (AWS).
  - dev2 is AKS (Azure).
  - dev3 is GKE (Google).
  - A context identifies a cluster and holds its access-control data.
- Practice
  - Create a namespace and account for dev1, dev2, and the cluster.
  - Create a Role for dev1 and bind the role to the account.
  - The dev1 role has get and list permissions, so an error occurs when it tries to create.
  - Create a Role for dev2 and bind the role to the account.
  - The dev2 role has get, list, and create permissions, so an error occurs when it tries to delete.
  - Create a Role for the cluster and bind the role to the account.
  - The cluster role has every verb, but only for pods, deployments, and deployment scale, so an error occurs when it tries to use services.
- If you want to filter for a specific word, use `grep`.
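A sketch of the dev1 case above: a Role limited to get and list, bound to a service account (the account and role names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev1
  name: role-dev1              # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]       # create and delete are denied, as in the practice
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev1
  name: rolebinding-dev1
subjects:
- kind: ServiceAccount
  name: account-dev1           # hypothetical account name
  namespace: dev1
roleRef:
  kind: Role
  name: role-dev1
  apiGroup: rbac.authorization.k8s.io
```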
Resource Management
Resource Quota
- Limits on resources.
- When we try to take more than the limited value, an error occurs.
- Error with storage
- Error with PVCs
- Error with pods
- It tries to create 11 pods, but an error occurs on the 11th pod.
- The command in 176.jpg is `k describe -n dev1 replicasets.apps quota-pod11-failure-7fc88499c7`.
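A sketch of a quota that produces the failures above (the name and exact limits are hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-dev1                # hypothetical name
  namespace: dev1
spec:
  hard:
    pods: "10"                    # the 11th pod fails
    persistentvolumeclaims: "2"   # further PVCs fail
    requests.storage: 2Gi         # storage requests beyond this fail
```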
LimitRange
- A LimitRange sets boundaries for the resources objects may request in a namespace.
- The pod's minimum memory size is 256 Mi and the maximum is 512 Mi.
- The PVC's minimum storage size is 1 Gi and the maximum is 2 Gi.
- Like this, we can set a LimitRange per namespace.
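The limits above can be sketched like this (the object name is hypothetical):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits-dev2      # hypothetical name
  namespace: dev2
spec:
  limits:
  - type: Pod
    min:
      memory: 256Mi      # minimum pod memory
    max:
      memory: 512Mi      # maximum pod memory
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi       # minimum PVC size
    max:
      storage: 2Gi       # maximum PVC size
```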
- G vs Gi
  - For example, 5 G means 5 gigabytes, while 5 Gi means 5 gibibytes.
  - 5 G = 5,000,000 KB / 5,000 MB
  - 5 Gi = 5,368,709.12 KB / 5,368.70 MB
- Now dev2 has a LimitRange with minimum 256 Mi and maximum 512 Mi.
- When we request more resources in that namespace (here dev2), an error occurs as below.
- But when we request appropriate resources, the object is created in the namespace.
- Extra experiment
  - Each worker node has 1.5 Gi of memory.
  - We test on w3-k8s by creating 6 pods with the minimum size 256 Mi (6 × 256 Mi = 1.5 Gi).
  - It should hit out-of-memory, because the total needed memory is bigger than 1.5 Gi (e.g., pod overhead).
  - So please delete the pods ASAP if you run this test.
Network Policy
- Ingress traffic: traffic coming into the server through the firewall.
- Egress traffic: traffic going out of the server through the firewall.

Network Policy in Kubernetes
- Ingress and Egress set the traffic direction the policy applies to.
- Before you run the experiments, check that you have the net tools.
- If you don't have the net tools, run `0-1-net-tools-ifn-default.yaml` and `0-2-net-tools-ifn-dev[1-2].yaml` first.
- Experiment 1: deny all
  - If the label's role is sensitive, deny everything.
  - This blocks all network traffic for the pod.
  - Ingress and Egress are only declared, with no rules describing allowed transport.
  - You can't reach the server, so please delete the pods and policy after the experiment.
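A sketch of the deny-all policy described above (the policy name is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all           # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: sensitive      # applies to pods with this label
  policyTypes:             # declared with no rules: everything is denied
  - Ingress
  - Egress
```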
- Experiment 2: allow only matched labels
  - If the label's role is internal, Ingress and Egress are allowed only for matching labels.
  - In this case, ingress comes in through the chk-info app and egress goes out through the chk-info app.
  - Those pods can connect only with each other; traffic flows only between them.
  - That is why we cannot reach the pod from outside.
  - Please delete the pods and policy before going to the next experiment.
- Experiment 3-1: using an IP block as the criterion
  - `podSelector: {}` means there is no label criterion, so it matches every label.
  - In this case, ingress is allowed from 172.16.0.1–172.16.255.254 and egress to 172.16.0.1–172.16.127.254.
  - So half of the IPs cannot communicate.
  - I can use only `172.16.103.132`; the others are above .127.
  - I entered the net pod with `k exec net -it -- /bin/bash` and connected with `ping 172.16.103.132`.
  - Of course, I cannot connect with `ping 172.16.221.133` because of the policy.
  - Please delete the pods and policy for the next experiment.
- Experiment 3-2: using an IP block as the criterion
  - You can also exclude specific IPs from transport with `except`.
  - The excluded range should not be reachable.
  - In my case, I blocked `172.16.132.0/24` for ingress and egress.
  - You can also change the code above with the `vi` command.
  - Please delete the pods and policy for the next experiment.
- Experiment 4: using a namespace as the criterion
  - In this case, ingress is only allowed from dev2.
  - So it doesn't work when you are not in dev2.
  - But it works when you are in dev2, because of the policy.
  - Please delete the pods and policy for the next experiment.
Application Construction and Management
- Version upgrades
- Auto-scaling
- Deployment with a Web UI
ConfigMap
- A ConfigMap holds additional or changeable settings and messages.
- If the ConfigMap changes, the environment variables change too.
- MetalLB chooses its IP range via a ConfigMap in the load-balancer service.
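A sketch of MetalLB's address-pool ConfigMap (the IP range is hypothetical; the name and namespace follow MetalLB's v0.10-era convention):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config                        # name MetalLB expects
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.11-192.168.1.19     # hypothetical IP range for LoadBalancers
```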
Secret
- For security (e.g., IDs or passwords).
- Secret types
- The encoded keys can be decoded with `echo {your key} | base64 --decode`.
- Secrets and ConfigMaps can be changed with `kubectl edit`.
- That means, when a pod dies and revives, its Secret and ConfigMap values may change with it.
- If a Secret or ConfigMap should not or will not be changed, use `immutable`.
- After deploying an immutable Secret, you cannot change it again.
Roll Out
- Rolling update
  - Kubernetes updates pods one by one to keep the service running.
  - Update with a nonexistent version.
  - Update with a specific version.
Kustomize
- Dynamic deployment.
- With Kustomize, the several parts of MetalLB can be deployed at once.
Kustomize process
- We can dynamically create the kustomization.yaml file from sources with `kustomize create`.
- Then we can build the upgraded manifest with `kustomize build`.
- Finally, you can upgrade with `kubectl apply -f`.
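A sketch of what the generated kustomization.yaml might look like (the resource file names are hypothetical):

```yaml
# kustomization.yaml — sketch; resource file names are hypothetical
resources:
- namespace.yaml
- metallb.yaml
images:
- name: quay.io/metallb/controller   # pin the image version here
  newTag: v0.10.2
```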
- Delete MetalLB.
- Check that MetalLB is deleted and install kustomize.
- Create the kustomization and edit the version.
- Deploy MetalLB.
- Check that MetalLB is deployed.
helm
- Helm makes dynamic deployment easier.
- helm > kustomize > kubectl (most to least automated).
- The user's work with helm is just adding a repository and installing a release.
- Install helm.
- Add the repository.
- Delete the old MetalLB and NFS provisioner, because we will use only helm.
- Install MetalLB with helm.
  - This is `metallb-installer-by-helm.sh`.
  - Helm installs MetalLB in the namespace `metallb-system` with image version `0.10.2`.
  - Helm applies the config with this.
- Install the NFS provisioner with helm.
  - This is `nfs-provisioner-installer-by-helm.sh`.
- Uninstall MetalLB with helm, if you need to.
Metrics-Server
- Monitors resources like CPU and memory.
- Each kubelet on the worker nodes sends its measured values to the Metrics-Server.
- Users can then query the measured values from the Metrics-Server through the API.
- `kubectl apply -k .` reads the kustomization file in the current directory and applies all the referenced files.
- You can see the measured values with `k top`.
HPA (Horizontal Pod Autoscaler)
- Syncs with the Metrics-Server.
- Scales pods automatically.
- Applications are managed automatically depending on the status of the Kubernetes resources.
- HPA is for linearly increasing or predictable load.
- The API server sends measured values about the pods to the HPA.
- The HPA adjusts the deployment when needed.
- Activate the HPA with the code below or the command `k autoscale deployment deploy-4-hpa --min=1 --max=10 --cpu-percent=50`.
- You can generate the code below with `k autoscale deployment deploy-4-hpa --min=1 --max=10 --cpu-percent=50 --dry-run=client -o yaml`.
- Set load.
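As a sketch, the dry-run command above yields roughly this manifest:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: deploy-4-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  scaleTargetRef:                    # the deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: deploy-4-hpa
  targetCPUUtilizationPercentage: 50 # scale out above 50% CPU
```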
Exercise
- Apply the deployment and autoscale.
- Monitor the pods with `watch kubectl top pods --use-protocol-buffers`.
- Monitor the HPA with `watch kubectl get hpa`.
- Set load.
- Stop load.
kube-dashboard
- You can run all the commands from the Web UI.
Exercise
- Check the kubernetes-dashboard service IP.
- Open the Web UI with the kubernetes-dashboard service IP.
- Click skip for authorization.
- Create a pod with the left-top `+` button.
- Click the deploy button.
- Click Pods in the breadcrumb.
- Click the three-dots button to see the information of pod-web.
- Click the deployment.
- Delete the pod-web deployment with the three-dots button.