Date created: Friday, November 4, 2022 10:03:04 AM. Last modified: Thursday, February 1, 2024 3:09:28 PM

Kubernetes

Copy

Copy a file to/from a remote container (note that kubectl cp requires a tar binary inside the container):

$ kubectl cp namespace1/pod1:/a-file-name.txt ./a-file-name.txt
$ kubectl cp local_file namespace1/pod-name:/remote_file -c container-1

 

Exec

Start a bash shell in a specific pod:

$ kubectl -n n1 exec pod1 -i -t -- /bin/bash

 

Start a bash shell in a pod of a specific deployment:

$ kubectl -n n1 exec deploy/d1 -i -t -- /bin/bash

 

Start a bash shell in a specific container of a specific deployment:

$ kubectl -n n1 exec deploy/d1 -c c1 -i -t -- /bin/bash

 

Reference a variable within the container:

$ kubectl exec deploy/d1 -n n1 -- /bin/sh -c 'echo $HOSTNAME'

 

Set an infinite loop as the entrypoint in a deployment:

containers:
- image: foo:latest
  command: [ "/bin/bash", "-c", "--" ]
  args: [ "while true; do sleep 30; done;" ]

 

Exec-Node

Find which node a pod is running on, then start a shell on that node using the node-shell plugin: https://github.com/kvaps/kubectl-node-shell

$ kubectl -n ns1 describe pod pod123 | grep -i node
$ kubectl-node_shell --kubeconfig ~/.kube/cluster.yml node1

 

Deployments

List deployments:

$ kubectl get deployments -A
$ kubectl get deployment d1 -n n1

 

Scale deployments:

$ kubectl scale -n n1 --replicas=0 deployment/d1
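
To watch the scale change or a rollout complete (rollout status blocks until the deployment is fully rolled out):

$ kubectl -n n1 rollout status deployment/d1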

 

Kustomize

Generate the config that Kustomize would apply:

$ kubectl kustomize ./

 

Diff it against the running config:

$ kubectl kustomize ./ | kubectl diff -f -

 

Apply it:

$ kubectl apply -k ./

 

Remove it:

$ kubectl delete -k ./

 

Logs

Get the logs from a crashed or failed pod (-p/--previous selects the previous container instance):

$ kubectl -n n1 logs -p --tail 10 deploy/d1

 

Get the logs from a running pod:

$ kubectl -n n1 logs -f deployment/d1

 

Jobs

Get all jobs:

$ kubectl get jobs -A

 

View details of a specific job:

$ kubectl -n n1 describe job j1

 

View job logs:

$ kubectl -n n1 logs jobs/j1

 

Create a job by copying another; the new job runs immediately (this only works for cronjobs):

$ kubectl -n n1 create job j2 --from=cronjob/j1

Jobs run in a pod. To view the logs of a cronjob, view the logs of the pod (note that after the job has completed the pod is deleted, and thus the logs are too):

$ kubectl -n n1 create job j2 --from=cronjob/j1
$ kubectl -n n1 describe job/j2 | grep "Created pod"
  Normal  SuccessfulCreate  16s  job-controller  Created pod: j2-scwn2
$ kubectl -n n1 logs pod/j2-scwn2 -f

 

Create a job by copying the original, deleting the original, then re-applying it (this works for non-cronjobs):

$ kubectl -n n1 get job j1 -o json | jq "del(.spec.selector)" | jq "del(.spec.template.metadata.labels)" > /tmp/j1.json
$ kubectl -n n1 delete job j1
$ kubectl -n n1 create -f /tmp/j1.json
$ kubectl -n n1 get job j1

 

Create a job by exporting one, editing it, then importing the file; the new job runs immediately:

$ kubectl -n n1 get job j1 -o yaml > /tmp/j2.yml
$ vi /tmp/j2.yml
$ kubectl -n n1 create -f /tmp/j2.yml

# See that the job's pod ran
$ kubectl -n n1 get pods

# See the job/pod logs
$ kubectl -n n1 logs jobs/j2

# Delete the job in order to import it again
$ kubectl -n n1 delete -f /tmp/j2.yml

 

Namespaces

Get all namespaces:

$ kubectl get namespaces

 

Dry-run creating a namespace, then create it for real and verify:

$ kubectl create namespace n1 --dry-run=server
$ kubectl create --save-config namespace n1
$ kubectl get namespaces n1

 

Nodes

View node details:

$ kubectl describe nodes
$ kubectl describe node node123

 

View node usage:

$ kubectl top node

 

Node Un/Draining

1. (Optional) Cordon off a node (meaning the scheduler will not schedule any new pods to this node):

$ kubectl cordon node1

2. Evict pods from the node (they will be gracefully stopped and started on another node; ensure the other nodes have capacity!). This will cordon the node if it is not already cordoned:

$ kubectl drain node1

Potentially useful/dangerous options for drain:

$ kubectl drain node1 --grace-period 0

$ kubectl drain node1 --force
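
Drain refuses to evict DaemonSet-managed pods and pods using emptyDir volumes unless told otherwise (emptyDir data is lost; on older kubectl versions the second flag is named --delete-local-data):

$ kubectl drain node1 --ignore-daemonsets --delete-emptydir-data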

3. Remove cordon from a node (allow pods to be scheduled on the node again):

$ kubectl uncordon node1
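
Cordoned or drained nodes show SchedulingDisabled in their status; verify with:

$ kubectl get nodes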

 

Pods

Get the pods from all namespaces:

$ kubectl get pods -A

 

Get pods in a specific namespace:

$ kubectl -n ns1 get pods

 

View the status of a pod in a specific namespace:

$ kubectl -n n1 get pod p1

 

Get detailed status of a pod:

$ kubectl -n n1 describe pod p1

 

List containers in a pod (this can also be seen in the above command):

$ kubectl -n n1 get pods p1 -o jsonpath='{.spec.containers[*].name}'
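
If the pod has init containers, they are listed under a separate field:

$ kubectl -n n1 get pods p1 -o jsonpath='{.spec.initContainers[*].name}'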

 

Show pod details:

$ kubectl get pods -o wide
$ kubectl get pods -o wide -n n1
$ kubectl get pods -o wide -A

 

View pod usage:

$ kubectl top pod

 

Show the logs for a pod:

$ kubectl -n n1 logs p1

 

Restart a pod (pods can't be restarted; if a restart is really needed, scale the deployment down to zero and back up again):

$ kubectl scale -n n1 --replicas=0 deployment/d1
$ kubectl scale -n n1 --replicas=1 deployment/d1
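
On kubectl v1.15 and later, a rolling restart achieves the same without scaling to zero:

$ kubectl -n n1 rollout restart deployment/d1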

 

Port Forwarding

Port forward localhost:8080 to port 443 on the remote service:

$ kubectl -n namespace1 port-forward svc/my-server 8080:443
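
While the port-forward is running, the service is reachable locally; for example, assuming it serves HTTPS on port 443:

$ curl -k https://localhost:8080/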

 

Probes

A livenessProbe checks whether a container is alive/running. If this probe fails, kubelet sees the container as unhealthy and restarts it. This is a continuous probe.

A readinessProbe checks whether a container is healthy/ready to serve incoming traffic. If a readiness probe fails, incoming traffic is not sent to the container until the check passes again. This is a continuous probe.

A startupProbe performs a check only at container startup time; once it passes, it is not run again.
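
A minimal sketch of all three probes on a single container (the image name, port, and endpoint paths are illustrative assumptions, not taken from a real deployment):

containers:
- image: foo:latest
  startupProbe:            # runs first; the other probes are disabled until it passes
    httpGet:
      path: /healthz       # assumed health endpoint
      port: 8080
    failureThreshold: 30   # allow up to 30 x 10s = 5m for a slow start
    periodSeconds: 10
  livenessProbe:           # container is restarted if this fails
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
  readinessProbe:          # traffic is withheld while this fails
    httpGet:
      path: /ready         # assumed readiness endpoint
      port: 8080
    periodSeconds: 5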

 

Resources

Show pod usage:

$ kubectl top pods -A
NAMESPACE   NAME                  CPU(cores)   MEMORY(bytes)
n1          p1-7599c44747-m4fr5   1m           26Mi
n1          p2-7d9466748-r7cbm    1m           26Mi
n2          p1-d77bbf4cb-mb55x    1m           21Mi
n2          p2-d8bdb5b9d-w5vqt    0m           3Mi

Show node usage:

$ kubectl top nodes
NAME                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
nodepool-1-node-1   99m          5%     3149Mi          60%
nodepool-1-node-2   77m          4%     2843Mi          55%
nodepool-2-node-1   100m         5%     2838Mi          54%

 

Run

Run an image and drop into a shell:

$ kubectl run temp-cli --image=foo/bar:latest --restart=Never --rm -i --tty -- /bin/bash
# or
$ kubectl run temp-cli --image=bitnami/kubectl:latest --restart=Never --rm -i --tty --command -- /bin/bash

If there is an error, the pod isn't auto-deleted; delete it manually with:

$ kubectl delete pod/temp-cli

 

Secrets

Generate a sealed secret for a docker registry:

# First create a secret locally on the client
$ kubectl create secret docker-registry my-token-123 \
    --docker-server=registry.gitlab.com \
    --docker-username=gitlab+deploy-token-abc123 \
    --docker-password=s3cr3tp4ssw0rd \
    --namespace n1 \
    --dry-run=client \
    -o yaml > secret.yaml

# Grab the cluster's public key
$ kubeseal --fetch-cert > public.pem

# Encrypt the secret with the cluster's public key
$ cat secret.yaml | /opt/kubeseal/kubeseal \
    --controller-namespace kube-system \
    --controller-name sealed-secrets-controller \
    --namespace n1 \
    --scope cluster-wide \
    --format yaml \
    --cert=public.pem > sealed-secret.yaml
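
The sealed secret is safe to store in version control; apply it and the controller decrypts it into a regular Secret in the cluster:

$ kubectl -n n1 apply -f sealed-secret.yaml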

To view a docker JSON secret like the above example:

$ kubectl -n n1 get secret s1 -o jsonpath="{.data.\.dockerconfigjson}" | base64 -d

 

Generate a sealed secret from a file:

$ kubectl --kubeconfig ~/.kube/config \
    create secret generic my-secret-name \
    --from-file=secret_file_name.conf=./input_file_name.conf \
    --dry-run=client \
    -o yaml > secret.yaml

$ kubeseal --kubeconfig ~/.kube/config \
    --fetch-cert > public.pem

$ cat secret.yaml | kubeseal --kubeconfig ~/.kube/config \
    --controller-namespace kube-system \
    --controller-name sealed-secrets-controller \
    --scope cluster-wide \
    --format yaml \
    --cert=public.pem > sealed-secret.yaml

To view a key/value secret like the example above:

$ kubectl -n n1 get secret s1 -o jsonpath="{.data.MY_KEY_NAME}" | base64 -d
$ kubectl -n n1 get secret s1 -o json | jq ."data"."MY_KEY" | tr -d '"' | base64 -d

 

Generate a sealed secret for ENV vars from a file:

$ kubectl --kubeconfig ~/.kube/config \
    create secret generic my-env-vars \
    --namespace ns1 \
    --from-env-file=.env \
    --dry-run=client \
    -o yaml > secret.yaml
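
Then seal secret.yaml with kubeseal exactly as in the previous example:

$ cat secret.yaml | kubeseal --kubeconfig ~/.kube/config \
    --controller-namespace kube-system \
    --controller-name sealed-secrets-controller \
    --scope cluster-wide \
    --format yaml \
    --cert=public.pem > sealed-secret.yaml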

 

Wait

Pod conditions are listed in the pod lifecycle description.

  • PodScheduled: the Pod has been scheduled to a node.
  • PodHasNetwork: (alpha feature; must be enabled explicitly) the Pod sandbox has been successfully created and networking configured.
  • ContainersReady: all containers in the Pod are ready.
  • Initialized: all init containers have completed successfully.
  • Ready: the Pod is able to serve requests and should be added to the load balancing pools of all matching Services.

Wait can be used to wait for a pod to reach a certain condition. Services don't have conditions, so wait can't be used with a Service.
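
To inspect a pod's current conditions directly (using the same placeholder namespace/pod names as the examples below):

$ kubectl -n n1 get pod p1 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'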

 

Wait for "condition=ready" on a pod, specifically pod p1 "-l app=p1":

$ kubectl wait -n n1 --for=condition=Ready pod -l app=p1 --timeout=2m

 

Waiting for a deployment like in the following example only waits for the initial deployment; it doesn't work after the deployment has been deployed once. (Note that by default wait waits for a condition to be True, so the "=True" below is redundant, but the syntax is needed if you want to wait for condition=Foo=false.)

$ kubectl wait -n n1 --for condition=Available=True deployment d1 --timeout=30s