Category: cloud

K8S Tiller/Helm History Cleanup

GitHub Project: https://github.com/atkaper/k8s-tiller-history-cleanup

Introduction

WARNING: This is about Helm 2, from the old days… (2019). Please move to Helm 3+, and do not use this old cleanup anymore 😉 Article kept for historic purposes only.

In our on-premise Kubernetes cluster, we use Helm for a big part of our application / micro-service deployments. Helm uses an engine called Tiller (a deployment in the cluster). It executes the installs / updates / deletes, and it stores the results…

Read More
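The excerpt cuts off before showing how the cleanup works, but the core idea of a release-history cleanup is simple: keep only the newest N revisions of each release and delete the rest. A minimal Python sketch of that pruning logic (names and the `keep=5` default are illustrative assumptions, not taken from the linked project):

```python
from collections import defaultdict

def revisions_to_delete(entries, keep=5):
    """Given (release_name, revision) pairs, return the pairs to delete
    so that only the newest `keep` revisions per release remain.
    Sketch only: in a real Helm 2 setup the history lives in objects
    in Tiller's namespace, one per release revision."""
    by_release = defaultdict(list)
    for name, rev in entries:
        by_release[name].append(rev)
    doomed = []
    for name, revs in by_release.items():
        # sort revisions ascending, drop everything but the newest `keep`
        for rev in sorted(revs)[:-keep]:
            doomed.append((name, rev))
    return doomed
```

A real cleanup would then delete the corresponding Tiller storage objects for each `(release, revision)` pair returned.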

Docker Log Plugin

A quest I took on 2 November: making my own Docker log plugin, suitable to share with everyone. At the office, we mix docker-only hosts with on-premise Kubernetes clusters. For example, our databases run better / more stably on plain Docker hosts, with no Kubernetes intervention. That’s mainly due to the sub-optimal distributed storage in our on-premise cluster. But… we do want to send all of the logs from both Kubernetes and Docker hosts to a central logging…

Read More

K8S Check Certificate Chains

GitHub Project: https://github.com/atkaper/k8s-check-certificate-chains

Ingress/nginx (running in Kubernetes / K8S) does not like silly certificates, so I created two scripts to find wrong ones. The script get-all-k8s-certificates.sh retrieves all certificates from Kubernetes, and the check-certificate-chains.sh script verifies that the chains are complete and in proper order. I also added an md5 hash check on the crt and key files, to verify that the two belong to each other. The check-certificate-chains.sh script only reports WRONG certificates; run it with the “-v” option to also show the OKs. Thijs.
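The "complete and in proper order" check boils down to one rule: walking the chain from leaf to root, each certificate's issuer must be the subject of the next certificate in the bundle. A minimal Python sketch of that ordering check (the real scripts use openssl on PEM files; the tuple representation here is an assumption for illustration):

```python
def chain_is_ordered(chain):
    """Check chain ordering for a list of (subject, issuer) tuples,
    leaf certificate first. Each certificate must be issued by the
    next one in the list. Sketch of the idea only; the real
    check-certificate-chains.sh works on actual PEM certificates."""
    for (_, issuer), (parent_subject, _) in zip(chain, chain[1:]):
        if issuer != parent_subject:
            return False  # broken or mis-ordered chain
    return True
```

The crt/key match mentioned above is commonly done by comparing the md5 of `openssl x509 -noout -modulus` (certificate) against `openssl rsa -noout -modulus` (key): if the hashes differ, the pair does not belong together.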

K8S Network Test Daemonset

GitHub Project: https://github.com/atkaper/k8s-network-test-daemonset

Description

An on-premise K8S (Kubernetes) cluster needs a properly working virtual network to connect all masters and nodes to each other. In our situation, the host machines (VMware RedHat) are not all 100% the same, and cannot easily be wiped clean on new K8S and OS upgrades. Therefore we sometimes experienced issues in which the nodes or masters could not always reach each other. We used the flannel network, which often caused weird issues. We…

Read More
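A daemonset like this typically runs one pod per node and has the pods probe each other to detect which node pairs cannot communicate. A minimal Python sketch of a single such probe, using a plain TCP connect (host, port, and timeout are placeholder assumptions, not the linked project's actual targets or method):

```python
import socket

def node_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds. A sketch of the kind of per-node reachability
    check a network-test daemonset performs; the real project may use
    different probes and ports."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # connection refused, timed out, or host unreachable
        return False
```

Running this from every node against every other node yields a full reachability matrix, which quickly pinpoints the node pairs with a broken overlay-network path.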