Posts tagged: containers
All posts with the tag "containers"
You can give k3s an install channel to install stable, latest, or specific versions like 1.26. This is handy to make sure that you install the same version on all of your workers.
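With the official install script, the channel or a pinned version is passed as an environment variable. A minimal sketch (the exact version string below is just an example, check the k3s releases for a real one):

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=v1.26 sh -   # channel can be stable, latest, or a minor version like v1.26
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.26.4+k3s1 sh -   # or pin an exact release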
For my reader app I am using cronjobs to schedule a new build and upload to cloudflare pages every hour. In this example I have built a docker image docker.io/waylonwalker/reader-waylonwalker-com and pushed it to dockerhub. It uses a CLOUDFLARE_API_TOKEN secret to access cloudflare, and the entrypoint itself does the build and upload.
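A minimal sketch of what that CronJob can look like; the image and env var name come from the post, while the resource names and the secret name reader-cloudflare are assumptions:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: reader-build
spec:
  schedule: "0 * * * *"   # run the build every hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: reader-build
              image: docker.io/waylonwalker/reader-waylonwalker-com
              env:
                - name: CLOUDFLARE_API_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: reader-cloudflare   # assumed secret name
                      key: CLOUDFLARE_API_TOKEN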
kubeseal is a pretty simple way to get started managing secrets so that they can be stored in a git repo and picked up by your continuous delivery service.
Sealed Secrets provides declarative Kubernetes Secret Management in a secure way. Since the Sealed Secrets are encrypted, they can be safely stored in a code repository. This enables an easy to implement GitOps flow that is very popular among the OSS community.
In my homelab kubernetes cluster I am using kubeseal to encrypt secrets. I have been using it for a few months now with great success. It allows me to commit all of my secrets manifests to git without risk of leaking secrets.
You see, kubeseal encrypts your secrets with a public key whose matching private key is only stored in your cluster, so only the sealed-secrets controller in the cluster can decrypt them.
https://sealed-secrets.netlify.app/
Installation happens in two steps: you need the kubernetes controller running in the cluster and the client-side cli to create sealed secrets.
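Roughly, that looks like this; the controller release version here is just an example, check the sealed-secrets releases page for the current one:

kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.26.0/controller.yaml   # install the controller into the cluster
brew install kubeseal   # install the client-side cli
# create a sealed secret from a regular secret without ever applying the plain one
kubectl create secret generic my-secret --from-literal=password=hunter2 --dry-run=client -o yaml | kubeseal --format yaml > my-sealed-secret.yaml
kubectl apply -f my-sealed-secret.yaml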
...
kubernetes 6 months in
I stumbled into kubernetes in December 2023 when I was looking for a better way to self host applications. I was looking for something that didn't require logging into a server and building and deploying like a cave man. I wanted a smoother experience than docker compose was giving me.
https://waylonwalker.com/looking-for-a-heroku-replacement/
This post turned into a list of tools that I have adopted into my k8s workflow and plan to keep. Enjoy.
...
What is the difference between health, liveness, readiness, and startup? This article does a great job with a full write-up of how these work in kubernetes; here is my TLDR.
health 200 OK - I’m still responding to requests
health ERR - something happened and I can't respond to requests
liveness 200 OK - I'm still alive, don't restart me
...
The convention of "z-pages" comes from Google; it reduces the likelihood of collisions with application endpoints and keeps the convention consistent across all applications.
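As a rough sketch of how those probes get wired up, the container spec in a deployment can point at z-page style endpoints; the paths and port here are assumptions, not from the article:

# inside a deployment's container spec (paths and port are assumptions)
livenessProbe:
  httpGet:
    path: /livez
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /readyz
    port: 8000
  periodSeconds: 5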
I've been using this for a few weeks now and it's fantastic. It reminds me of lazygit: it gives a nice quick interface into the things I need and it just works. Yes, I can git status to see what changed, then diff the files, then commit hunks, but lazygit can do that in just a few keystrokes. lazydocker does this for docker. It gives me a nice view into what's running, what's eating up disk space, and the networks I have. And if I see I have a bunch of exited containers, there is a bulk command right there to clean them up.
TLDR: docker ps on steroids
Uptime Kuma is a fantastic self-hosted monitoring tool. One docker run command and you are up and running. Once you are in you have full control over checking the status of urls, frequency, allowed timeouts, and a HUGE list of notification providers.
docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name uptime-kuma louislam/uptime-kuma:1
I deployed it in my homelab today.
I am converting my docker compose env secrets over to k8s secrets. This guide was clear and to the point on how to replicate this exact workflow. First set the secret; the easiest way is to use kubectl with --from-literal because it automatically base64 encodes for you.

kubectl create secret generic minio-access-key --from-literal=ACCESS_KEY=7FkTV**** -n shot

If you don't use --from-literal you will have to base64 encode it.

echo "7FkTV****" | openssl base64

Once you have your secret deployed, you have to update the container spec in your deployment manifest to get the valueFrom secretKeyRef.

Wow, shocked at these results. All this time I've been told and believed that k8s is incredibly hard, and that you need a $1M problem before you think about it because it will take a $1M team to maintain it. So far my experience has been good, and I definitely do not have a $1M problem in my homelab.

I was looking to add running kubernetes jobs to a python cli I am creating, and I found this solution, mostly thanks to ollama run mistral:7b-instruct-q4_K_M and my loose understanding of what the yaml syntax is supposed to look like for a kubernetes job. This will let me create a job in the cluster, choose the image that runs, the command that is called, and how long until the job expires and is cleaned up. While the job still exists I can go in and look at the logs, but once its ttl has expired they are gone.

kompose is a sick cli to convert docker-compose.yml to kubernetes manifests.

This is a sick kubernetes architecture diagram generation tool. Here is an example.

Running your own docker registry in one line.

Example of how to add a pvc to a deployment.

I was curious to see what was going on inside of my minio object storage. Great technique here by Frank to create an inspector pod, then you can do as you wish with the data. I created the manifest as pvc-inspector.yml:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
    - image: busybox
      name: pvc-inspector
      command: ["tail"]
      args: ["-f", "/dev/null"]
      volumeMounts:
        - mountPath: /pvc
          name: pvc-mount
  volumes:
    - name: pvc-mount
      persistentVolumeClaim:
        claimName: pvc-name

Then used it like this.

In order to use a k8s secrets manifest you first need to encode the data values.

Right after installing k3s you are going to need to use sudo to run any kubectl command. The reason for this is that the default config is owned by root. To get around this you will need to make your own config and set the KUBECONFIG environment variable. To do this I used sudo one last time to copy the k3s.yaml file into my own directory and take ownership of it.
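Roughly, that looks like this; the destination path is up to you, while /etc/rancher/k3s/k3s.yaml is where k3s writes the config by default:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config   # copy the root-owned config
sudo chown $(id -u):$(id -g) ~/.kube/config        # take ownership of the copy
export KUBECONFIG=~/.kube/config                   # point kubectl at it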