Planet DGPLUG

Feed aggregator for the DGPLUG community


The Big Plan: Change My VM to Be Gitops Driven

Intended Audience

Me!

Update 2025-12-19: All done!


I just finished a move from one Hetzner VM to another.
The VM type is the same, in fact.
It’s just that the new VM and all the software on it are entirely software driven. I kept logging my progress in my notes, copy-pasting the plan from day to day and ticking things off.
Now that it’s done, and I still want to refer to it regularly as the rest of the services come over, I wanted a place to keep it. And so this post, it is.


List of services that absolutely need to come over. Miscellaneous stuff later.

  • the main domain
  • french version of the website
  • the mastodon archive
  • the email distribution list
  • miniflux for rss feeds
  • joplin
  • baikal
  • discourse (no more discourse!)
  • markdown editor (hedgedoc)
  • anki
  • huginn
  • syncthing
  • IRC: theLounge + znc (see if we can make do with a single service now (2025-12-15: we could!))
  • kanboard
  • Certs: move them over, or figure out a way to generate and renew them via Ansible

The Big Point of The Big Plan

  • Save time and energy. Managing all the disparate services I use is taking more and more of my time. I need to claim that back, while being able to use said services.
  • Be gitops driven. Managing stuff gets easier. Tearing down things and rebuilding them gets easier
  • Have most everything I use be in a Kubernetes cluster.
  • Be pragmatic enough to know that not everything can be in a Kubernetes cluster; some things will have to live in the root VM
  • Have Flux CD manage everything in the cluster
  • Have Ansible Pull manage everything in the VM, acting as my single node. The point of doing this is not idempotency, rather to have everything in code; something that I can comment and uncomment and manipulate at will, something I can update at will and something that is documented. Never again will Future Jason have to scratch his head about, just how to go about doing something. (Long term note to self: Have the discipline to write tasks and drive everything with Ansible, despite the ease of “just doing it at the command line”)

The Big Plan (Done! 🎉🎉🎉)

  • The plan is to redo the cluster again and do my own instance of
    • K3s
    • Sealed Secrets
    • Flux CD
    • Certmanager (Not using it)
    • Letsencrypt (using pre existing Letsencrypt certs)
    • Get Traefik Ingress to work
    • Figure out a way to get certs automatically into the cluster
  • And once that is done, figure out an app to move (Miniflux or Hedgedoc?); 2025-12-03: Kanboard it is!
  • Begin by moving (lifting and shifting in popular parlance) Kanboard to the cluster
    • Cert will probably be needed (Wildcard cert works now, just like it does without the cluster)
    • Convert a docker-compose to kubernetes manifests
    • Learn how to configure an app with code
    • Learn how to store data and back it up
    • Figure out secrets, if there are any (for now sealed secrets ok, figure out vault and vault injection later)
    • Learn how to tunnel through and reverse proxy
    • Make Kubernetes manifests work with flux
    • Figure out how to automate deployment of manual manifests
    • Figure out how to migrate data over, if there is any in an old app
    • Figure out how to automate updating of images in manual manifests
    • Get another app (Miniflux) deployed
    • Figure out what needs to happen as part of the lifecycle. What do you want in the cluster, what stays out, do they intersect, and how do updates of the cluster happen? VM (node) updates as well?
    • Then begin to think along the lines of Live Deploys. Prototype locally and once it works, migrate to production immediately
    • Convert Kubernetes manifests to Helm Charts (optional, based on energy)
  • Go live! Git is source of truth. Two repos.
    • One for the main node and its updates
      • Terraform will provision the node, install packages, and set up the firewall
      • Figure out how to get Terraform to get the node talking to the git forge
      • Structure the repo, copy everything node related there, and make sure stuff gets updated periodically and, if possible, idempotently, via ansible-pull and a systemd timer (see the sketch after this list)
    • The other one for k3s and flux
      • Convert everything I have done locally to run on prod. Add more steps as you do them below
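
Since ansible-pull plus a systemd timer is the backbone of the node repo, here is a minimal sketch of what I have in mind. (The repo URL and playbook name are placeholders, not the real ones.)

# /etc/systemd/system/ansible-pull.service
[Unit]
Description=Apply node configuration via ansible-pull

[Service]
Type=oneshot
ExecStart=/usr/bin/ansible-pull --url https://git.example.org/me/node-repo.git local.yml

# /etc/systemd/system/ansible-pull.timer
[Unit]
Description=Run ansible-pull every 30 minutes

[Timer]
OnCalendar=*:0/30
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now ansible-pull.timer.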

Unrelated. Long term. Optional. Just here so that I remember

  • Get Moi publish script running
  • Redo Huginn Scenarios


Feedback on this post?
Mail me at feedback at this domain.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!


December 19, 2025 12:30 PM

Is Kubernetes ServiceAccount a JWT token? And how to verify it?

(More of my “notes for self”, as I continue reading the thesis paper, Usable Access Control in Cloud Management Systems, written by Lucas Käldström.)

I ran another small experiment today.

Today, I learnt that the Kubernetes Service Account tokens that I use very often to authenticate with the API server (using Authorization: Bearer <token> header with the HTTP request) are JWT (JSON Web Token) tokens.

I learnt this as a verbal fact first, so, I wanted to verify it in my mighty Kind cluster.

So, let’s first create a Kind cluster.

❯ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

Create a test pod and exec inside the container.

❯ kubectl run jwt-test \
   --image=busybox \
   --restart=Never \
   -- sleep 3600
pod/jwt-test created

❯ kubectl exec -it jwt-test -- sh
/ # 

Now that I’m inside the container, let’s run some tests.

First, print the contents of the /var/run/secrets/kubernetes.io/serviceaccount/token file.

/ # cat /var/run/secrets/kubernetes.io/serviceaccount/token

eyJhbGciOiJSUzI1NiIsImtpZCI6IncwY3FpcXhvZGt1SFlGelNQa1FwenFMcmpoeEFkVi1McjFYcTZVTEh3X1kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzk3Njg0MTc3LCJpYXQiOjE3NjYxNDgxNzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMGQyMTFiNGEtNjVjMi00ODEyLWIwYjEtNGUzY2I2NzI5ZGMzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC1jb250cm9sLXBsYW5lIiwidWlkIjoiNWU0MmM4M2YtMmI2NC00ZjU3LWEyZGMtMjI3M2ZmZjk3ZTBlIn0sInBvZCI6eyJuYW1lIjoiand0LXRlc3QiLCJ1aWQiOiJlNDdmMDVlZi00MWMzLTRmNDctYTdmNC01MDc1ZmIzZGQ2ZDMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI4Y2ZhNWYxNS0wOWJhLTRmM2QtODE2Ny02OGFhNjE5ZjRmN2YifSwid2FybmFmdGVyIjoxNzY2MTUxNzg0fSwibmJmIjoxNzY2MTQ4MTc3LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0.PA010UKl5ldQCOAk5s-iNRHEsbxkyIscTUsNn1c3hE9TL-uTCTl7_7QnI8-NOmx5Qjj7GPvF2QaHCeynOXlLq-Nt5mcvnOb6IipTfcH0Mfa7OCBufgPo82ggUA7T09kwcs7pmxZoL_lHxBBElFOMl9cMyhYO7I46JZ_AmvmzO4ctD3_ojQ6cyciXx4YZt78IwbM9QdM24e64BjyI_rdCGk3Y8990zodydn447VP9V6UAVQJJV49eleUnWMnQHTc3Z8UGjmawLeSaDQTqxXQ_fr9YTpHwbA_MqmXggFAmVIVQo0hTjfZxtcxuJe-8mM69Lm9krNJ7PsEuQeUB_9WyxA

Now base64 decode the above token I got.

echo "eyJhbGciOiJSUzI1NiIsImtpZCI6IncwY3FpcXhvZGt1SFlGelNQa1FwenFMcmpoeEFkVi1McjFYcTZVTEh3X1kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzk3Njg0MTc3LCJpYXQiOjE3NjYxNDgxNzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMGQyMTFiNGEtNjVjMi00ODEyLWIwYjEtNGUzY2I2NzI5ZGMzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC1jb250cm9sLXBsYW5lIiwidWlkIjoiNWU0MmM4M2YtMmI2NC00ZjU3LWEyZGMtMjI3M2ZmZjk3ZTBlIn0sInBvZCI6eyJuYW1lIjoiand0LXRlc3QiLCJ1aWQiOiJlNDdmMDVlZi00MWMzLTRmNDctYTdmNC01MDc1ZmIzZGQ2ZDMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI4Y2ZhNWYxNS0wOWJhLTRmM2QtODE2Ny02OGFhNjE5ZjRmN2YifSwid2FybmFmdGVyIjoxNzY2MTUxNzg0fSwibmJmIjoxNzY2MTQ4MTc3LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0.PA010UKl5ldQCOAk5s-iNRHEsbxkyIscTUsNn1c3hE9TL-uTCTl7_7QnI8-NOmx5Qjj7GPvF2QaHCeynOXlLq-Nt5mcvnOb6IipTfcH0Mfa7OCBufgPo82ggUA7T09kwcs7pmxZoL_lHxBBElFOMl9cMyhYO7I46JZ_AmvmzO4ctD3_ojQ6cyciXx4YZt78IwbM9QdM24e64BjyI_rdCGk3Y8990zodydn447VP9V6UAVQJJV49eleUnWMnQHTc3Z8UGjmawLeSaDQTqxXQ_fr9YTpHwbA_MqmXggFAmVIVQo0hTjfZxtcxuJe-8mM69Lm9krNJ7PsEuQeUB_9WyxA" | base64 -d

{"alg":"RS256","kid":"w0cqiqxodkuHYFzSPkQpzqLrjhxAdV-Lr1Xq6ULHw_Y"}
base64: invalid input

Ah, I got an invalid input.

But what I learnt is that a JWT token is a three-part thing.
Each part is a Base64url encoded blob, and the parts are separated (or joined) by dots.
So, a full token will look something like:

<base64url(header)>.<base64url(payload)>.<base64url(signature)>

What happened in our first attempt at decoding the ServiceAccount token is this:
it decoded the first blob, reached the first dot, and failed there.
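
Instead of splitting by eye, cut can do the splitting for us (a small sketch; the same idea comes back at the end of this post):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
HEADER=$(echo "$TOKEN" | cut -d. -f1)      # part 1
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2)     # part 2
SIGNATURE=$(echo "$TOKEN" | cut -d. -f3)   # part 3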

So, if I divide the above token into 3 parts now, it will be:

Part 1:

eyJhbGciOiJSUzI1NiIsImtpZCI6IncwY3FpcXhvZGt1SFlGelNQa1FwenFMcmpoeEFkVi1McjFYcTZVTEh3X1kifQ

Part 2:

eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzk3Njg0MTc3LCJpYXQiOjE3NjYxNDgxNzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMGQyMTFiNGEtNjVjMi00ODEyLWIwYjEtNGUzY2I2NzI5ZGMzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC1jb250cm9sLXBsYW5lIiwidWlkIjoiNWU0MmM4M2YtMmI2NC00ZjU3LWEyZGMtMjI3M2ZmZjk3ZTBlIn0sInBvZCI6eyJuYW1lIjoiand0LXRlc3QiLCJ1aWQiOiJlNDdmMDVlZi00MWMzLTRmNDctYTdmNC01MDc1ZmIzZGQ2ZDMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI4Y2ZhNWYxNS0wOWJhLTRmM2QtODE2Ny02OGFhNjE5ZjRmN2YifSwid2FybmFmdGVyIjoxNzY2MTUxNzg0fSwibmJmIjoxNzY2MTQ4MTc3LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0

Part 3:

PA010UKl5ldQCOAk5s-iNRHEsbxkyIscTUsNn1c3hE9TL-uTCTl7_7QnI8-NOmx5Qjj7GPvF2QaHCeynOXlLq-Nt5mcvnOb6IipTfcH0Mfa7OCBufgPo82ggUA7T09kwcs7pmxZoL_lHxBBElFOMl9cMyhYO7I46JZ_AmvmzO4ctD3_ojQ6cyciXx4YZt78IwbM9QdM24e64BjyI_rdCGk3Y8990zodydn447VP9V6UAVQJJV49eleUnWMnQHTc3Z8UGjmawLeSaDQTqxXQ_fr9YTpHwbA_MqmXggFAmVIVQo0hTjfZxtcxuJe-8mM69Lm9krNJ7PsEuQeUB_9WyxA

Ok, now let’s decode them one by one again!

Decode Part 1:

echo "eyJhbGciOiJSUzI1NiIsImtpZCI6IncwY3FpcXhvZGt1SFlGelNQa1FwenFMcmpoeEFkVi1McjFYcTZVTEh3X1kifQ" | base64 -d | jq .
{
  "alg": "RS256",
  "kid": "w0cqiqxodkuHYFzSPkQpzqLrjhxAdV-Lr1Xq6ULHw_Y"
}

The first key-value pair "alg": "RS256" in the output tells us that this JWT token was signed using the RS256 (RSA + SHA-256) algorithm.
So, if I need to verify the token signature, I know which algorithm was used to sign it.

And the second key-value pair is the "kid": "w0cqiqxodkuHYFzSPkQpzqLrjhxAdV-Lr1Xq6ULHw_Y" part.
This is a hint to figure out which RS256 (RSA + SHA-256) private/public key pair was actually used to sign.
The value of kid is going to help us figure out the public key of this pair.

But where to find the Public Key? That information, I will get in the next part.


Decode Part 2:

echo "eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzk3Njg0MTc3LCJpYXQiOjE3NjYxNDgxNzcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMGQyMTFiNGEtNjVjMi00ODEyLWIwYjEtNGUzY2I2NzI5ZGMzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC1jb250cm9sLXBsYW5lIiwidWlkIjoiNWU0MmM4M2YtMmI2NC00ZjU3LWEyZGMtMjI3M2ZmZjk3ZTBlIn0sInBvZCI6eyJuYW1lIjoiand0LXRlc3QiLCJ1aWQiOiJlNDdmMDVlZi00MWMzLTRmNDctYTdmNC01MDc1ZmIzZGQ2ZDMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI4Y2ZhNWYxNS0wOWJhLTRmM2QtODE2Ny02OGFhNjE5ZjRmN2YifSwid2FybmFmdGVyIjoxNzY2MTUxNzg0fSwibmJmIjoxNzY2MTQ4MTc3LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0" | base64 -d | jq .

{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1797684177,
  "iat": 1766148177,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "jti": "0d211b4a-65c2-4812-b0b1-4e3cb6729dc3",
  "kubernetes.io": {
    "namespace": "default",
    "node": {
      "name": "kind-control-plane",
      "uid": "5e42c83f-2b64-4f57-a2dc-2273fff97e0e"
    },
    "pod": {
      "name": "jwt-test",
      "uid": "e47f05ef-41c3-4f47-a7f4-5075fb3dd6d3"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "8cfa5f15-09ba-4f3d-8167-68aa619f4f7f"
    },
    "warnafter": 1766151784
  },
  "nbf": 1766148177,
  "sub": "system:serviceaccount:default:default"
}

Ok, this is the payload of the JWT token.

  • The "iss": "https://kubernetes.default.svc.cluster.local" is what issued this JWT token. So, it’s the Issuer.

    This is what I will use (later) to figure out the location of the public key.

  • The "aud": ["https://kubernetes.default.svc.cluster.local"] tells us who the intended audience for this token is.

  • The "sub": "system:serviceaccount:default:default" tells us who the subject of this token is.

    As in this token represents this service account (called default in the default namespace).

  • The "iat": 1766148177 part stands for Issued-at, and so the value is a timestamp for when this token was issued (in Unix format; see the quick check after this list).

  • The "nbf": 1766148177 part stands for “not before” meaning this token can’t be used before this time.

    In this example, it is matching the “Issued-at” time, but I’m assuming it can be configured (I don’t know how at this point).

  • The "exp": 1797684177 part stands for “Expiration”; again, it is a timestamp for when this token will expire.

  • The "jti": "0d211b4a-65c2-4812-b0b1-4e3cb6729dc3" is a unique ID for this JWT token.

  • And then the following part is a Kubernetes-specific entity, not really a standard JWT field.

    In this case, it’s giving Kubernetes some metadata: which namespace, and which particular node, pod, and ServiceAccount instances this token is tied to.

    And the warnafter bit hints at when clients should start refreshing (rotating) this token.

        "kubernetes.io": {
          "namespace": "default",
          "node": {
            "name": "kind-control-plane",
            "uid": "5e42c83f-2b64-4f57-a2dc-2273fff97e0e"
          },
          "pod": {
            "name": "jwt-test",
            "uid": "e47f05ef-41c3-4f47-a7f4-5075fb3dd6d3"
          },
          "serviceaccount": {
            "name": "default",
            "uid": "8cfa5f15-09ba-4f3d-8167-68aa619f4f7f"
          },
          "warnafter": 1766151784
        },
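
A quick check of the timestamp claims above (assuming GNU date on the host):

❯ date -u -d @1766148177    # iat / nbf
Fri Dec 19 12:42:57 UTC 2025

❯ date -u -d @1797684177    # exp, exactly 365 days after iat
Sat Dec 19 12:42:57 UTC 2026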
    

And finally, the Part 3 now:

echo "PA010UKl5ldQCOAk5s-iNRHEsbxkyIscTUsNn1c3hE9TL-uTCTl7_7QnI8-NOmx5Qjj7GPvF2QaHCeynOXlLq-Nt5mcvnOb6IipTfcH0Mfa7OCBufgPo82ggUA7T09kwcs7pmxZoL_lHxBBElFOMl9cMyhYO7I46JZ_AmvmzO4ctD3_ojQ6cyciXx4YZt78IwbM9QdM24e64BjyI_rdCGk3Y8990zodydn447VP9V6UAVQJJV49eleUnWMnQHTc3Z8UGjmawLeSaDQTqxXQ_fr9YTpHwbA_MqmXggFAmVIVQo0hTjfZxtcxuJe-8mM69Lm9krNJ7PsEuQeUB_9WyxA" | base64 -d
5�B��W�$�base64: invalid input

I got some random binary data here.

This, I learnt, is the raw RSA signature over the first 2 parts (Header and Payload) of the token.

I learnt it’s something like this:

RSA-SIGN(
  SHA256(
    base64url(header) + "." + base64url(payload)
  )
)

Where to find the Public Key (used to verify the JWT token signature)?

I saw in Part 2 decoded output this entry about who issued the token.

"iss": "https://kubernetes.default.svc.cluster.local",

Let’s see if I can find out some information from this Issuer url.

But one thing to note, before I make any request.

The Issuer of a JWT token serves the information that I am looking for at the path:

https://<url-of-the-issuer>/.well-known/openid-configuration

Now, back to our pod container.

❯ kubectl exec -it jwt-test -- sh
/ # wget https://kubernetes.default.svc.cluster.local
Connecting to kubernetes.default.svc.cluster.local (10.96.0.1:443)
wget: note: TLS certificate validation not implemented
wget: server returned error: HTTP/1.1 403 Forbidden

ok, when I tried to hit the https://kubernetes.default.svc.cluster.local url, I got 403 Forbidden.

So, I need some credentials.

You know what, here is what the Service Account token is used for.

I will pass the token as an Authorization header in our request.

Let’s try again.

/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

/ # wget --header="Authorization: Bearer $TOKEN" https://kubernetes.default.svc.cluster.local/.well-known/openid-configuration --no-check-certificate

Connecting to kubernetes.default.svc.cluster.local (10.96.0.1:443)
saving to 'openid-configuration'
openid-configuration 100% |********************************************************************************************|   236  0:00:00 ETA
'openid-configuration' saved

/ # cat openid-configuration | jq .
{
  "issuer": "https://kubernetes.default.svc.cluster.local",
  "jwks_uri": "https://172.20.0.2:6443/openid/v1/jwks",
  "response_types_supported": [
    "id_token"
  ],
  "subject_types_supported": [
    "public"
  ],
  "id_token_signing_alg_values_supported": [
    "RS256"
  ]
}

Ok, look at this part - "jwks_uri": "https://172.20.0.2:6443/openid/v1/jwks".
The “jwks” in jwks_uri stands for “JSON Web Key Set”.
This is what holds a collection of cryptographic public keys, used primarily for verifying digital signatures on JWT tokens.

So, I have almost gotten what I need.
Let’s hit this JWKS URL now.

/ # wget --header="Authorization: Bearer $TOKEN" https://172.20.0.2:6443/openid/v1/jwks --no-check-certificate
Connecting to 172.20.0.2:6443 (172.20.0.2:6443)
saving to 'jwks'
jwks                 100% |********************************************************************************************|   462  0:00:00 ETA
'jwks' saved

/ # cat jwks | jq .
{
  "keys": [
    {
      "use": "sig",
      "kty": "RSA",
      "kid": "w0cqiqxodkuHYFzSPkQpzqLrjhxAdV-Lr1Xq6ULHw_Y",
      "alg": "RS256",
      "n": "vKsQjvpHQWbez2dLiTb2aJp36SKpVWvk-egE1pRertMJmtq3eeDPskb8n_msAWY4GKIMx3RnmfKBMbs_WHAkVt681cH0AzF5CR_oUtJ0Unde1rInUls5nxQcQ7_cCjApyKQlY5x5Z_vASyh7fOvMKUWmfLJt7M20hDoEvlM0WF9kUeqAgexBXlFv106qc-3CoO2-HPN6mlOn8WqHd-Ky_jQaj5xm__A0o04H7JEu09n7_Z9Rws9TFqBHaGCXwio3cozh2Bjv6da7rmyZUSp7ztH_4UcfYQgt5iJnxUdsjD7vXnyWFwvefs-6Wn6vlRp4fVmCfNrkzDL7QPWsjJJoWQ",
      "e": "AQAB"
    }
  ]
}

Voila, I got it.

See, the kid and alg match exactly what I got in the decoded output of Part 1 of the JWT token.

{
  "alg": "RS256",
  "kid": "w0cqiqxodkuHYFzSPkQpzqLrjhxAdV-Lr1Xq6ULHw_Y"
}
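
Rather than copying them by hand, jq can also pull the fields we need straight out of the saved jwks file (a small sketch, wherever jq is available):

jq -r '.keys[0].kid' jwks
jq -r '.keys[0].n' jwks
jq -r '.keys[0].e' jwks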

So, I think the last part left for us now is to understand how I can use this to verify the signature on the token.

Let’s verify the token!

So, to verify the token, I will need the following bits I got from the jwks_uri output.

"n": "vKsQjvpHQWbez2dLiTb2aJp36SKpVWvk-egE1pRertMJmtq3eeDPskb8n_msAWY4GKIMx3RnmfKBMbs_WHAkVt681cH0AzF5CR_oUtJ0Unde1rInUls5nxQcQ7_cCjApyKQlY5x5Z_vASyh7fOvMKUWmfLJt7M20hDoEvlM0WF9kUeqAgexBXlFv106qc-3CoO2-HPN6mlOn8WqHd-Ky_jQaj5xm__A0o04H7JEu09n7_Z9Rws9TFqBHaGCXwio3cozh2Bjv6da7rmyZUSp7ztH_4UcfYQgt5iJnxUdsjD7vXnyWFwvefs-6Wn6vlRp4fVmCfNrkzDL7QPWsjJJoWQ",
"e": "AQAB"

Note: all notes from this point onwards are me copy/pasting instructions I got from docs or otherwise tinkering with AI.

The n and e are respectively called the “modulus” and the “public exponent”, which is what I will use to construct the RSA public key.

The mathematics below is what is used to convert the “n” and “e” to a public key.

What the following process does is 5 things with the “n” and “e” values I got:

  • convert them from base64url to base64 to binary.
  • then interpret them as integers
  • then wrap them into something called an ASN.1 structure
  • then DER (Distinguished Encoding Rules) encode that ASN.1 structure
  • and finally wrap that DER into a PEM.

Once I have the PEM version, openssl will be able to use it.

I’m doing the below steps on my host machine, because I need openssl, base64, xxd, and jq, which are not present in the container.

Ok, first step - we decode n and e to binary, replacing the following characters and (strictly speaking) adding back the = padding:

  • - to +
  • _ to /

❯ n="vKsQjvpHQWbez2dLiTb2aJp36SKpVWvk-egE1pRertMJmtq3eeDPskb8n_msAWY4GKIMx3RnmfKBMbs_WHAkVt681cH0AzF5CR_oUtJ0Unde1rInUls5nxQcQ7_cCjApyKQlY5x5Z_vASyh7fOvMKUWmfLJt7M20hDoEvlM0WF9kUeqAgexBXlFv106qc-3CoO2-HPN6mlOn8WqHd-Ky_jQaj5xm__A0o04H7JEu09n7_Z9Rws9TFqBHaGCXwio3cozh2Bjv6da7rmyZUSp7ztH_4UcfYQgt5iJnxUdsjD7vXnyWFwvefs-6Wn6vlRp4fVmCfNrkzDL7QPWsjJJoWQ"
❯ echo $n | tr '_-' '/+' | base64 -d > n.bin

❯ e="AQAB"
❯ echo $e | base64 -d > e.bin
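
The tr call handles the character swap, but nothing above adds the = padding back, which is why base64 -d can complain about invalid input here. A small helper of my own (just a sketch) that restores the padding before decoding:

# b64url_dec: read base64url on stdin, restore '=' padding, then decode
b64url_dec() {
  local s
  s=$(tr '_-' '/+')                                     # base64url -> base64 alphabet
  while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done  # pad to a multiple of 4
  printf '%s' "$s" | base64 -d
}

echo "$n" | b64url_dec > n.bin
echo "$e" | b64url_dec > e.bin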

ok, now do:

❯ xxd e.bin
00000000: 0100 01                                  ...

This represents the number 65537.
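
A quick sanity check that the bytes 01 00 01 really are 65537:

❯ printf '%d\n' 0x010001
65537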

Now, we need to convert these 2 n.bin and e.bin binaries to Hexadecimal strings.

❯ xxd -p n.bin | tr -d '\n'
bcab108efa474166decf674b8936f6689a77e922a9556be4f9e804d6945eaed3099adab779e0cfb246fc9ff9ac01663818a20cc7746799f28131bb3f58702456debcd5c1f4033179091fe852d27452775ed6b227525b399f141c43bfdc0a3029c8a425639c7967fbc04b287b7cebcc2945a67cb26deccdb4843a04be5334585f6451ea8081ec415e516fd74eaa73edc2a0edbe1cf37a9a53a7f16a8777e2b2fe341a8f9c66fff034a34e07ec912ed3d9fbfd9f51c2cf5316a047686097c22a37728ce1d818efe9d6bbae6c99512a7bced1ffe1471f61082de62267c5476c8c3eef5e7c96170bde7ecfba5a7eaf951a787d59827cdae4cc32fb40f5ac8c926859

❯ xxd -p e.bin
010001

Now, we need to convert these into an ASN.1 format.

so, we create a file called rsa.asn1, with the following contents:

asn1=SEQUENCE:rsa_key

[rsa_key]
modulus=INTEGER:0x<PASTE_HEX_OF_N_HERE>
publicExponent=INTEGER:0x<PASTE_HEX_OF_E_HERE>

so, the final version would look something like:

cat rsa.asn1 
asn1=SEQUENCE:rsa_key

[rsa_key]
modulus=INTEGER:0xbcab108efa474166decf674b8936f6689a77e922a9556be4f9e804d6945eaed3099adab779e0cfb246fc9ff9ac01663818a20cc7746799f28131bb3f58702456debcd5c1f4033179091fe852d27452775ed6b227525b399f141c43bfdc0a3029c8a425639c7967fbc04b287b7cebcc2945a67cb26deccdb4843a04be5334585f6451ea8081ec415e516fd74eaa73edc2a0edbe1cf37a9a53a7f16a8777e2b2fe341a8f9c66fff034a34e07ec912ed3d9fbfd9f51c2cf5316a047686097c22a37728ce1d818efe9d6bbae6c99512a7bced1ffe1471f61082de62267c5476c8c3eef5e7c96170bde7ecfba5a7eaf951a787d59827cdae4cc32fb40f5ac8c926859
publicExponent=INTEGER:0x010001

With this ASN.1 format available now, we can generate a DER encoded value.

❯ openssl asn1parse \
   -genconf rsa.asn1 \
   -out rsa_pub.der
   
    0:d=0  hl=4 l= 266 cons: SEQUENCE          
    4:d=1  hl=4 l= 257 prim: INTEGER           :BCAB108EFA474166DECF674B8936F6689A77E922A9556BE4F9E804D6945EAED3099ADAB779E0CFB246FC9FF9AC01663818A20CC7746799F28131BB3F58702456DEBCD5C1F4033179091FE852D27452775ED6B227525B399F141C43BFDC0A3029C8A425639C7967FBC04B287B7CEBCC2945A67CB26DECCDB4843A04BE5334585F6451EA8081EC415E516FD74EAA73EDC2A0EDBE1CF37A9A53A7F16A8777E2B2FE341A8F9C66FFF034A34E07EC912ED3D9FBFD9F51C2CF5316A047686097C22A37728CE1D818EFE9D6BBAE6C99512A7BCED1FFE1471F61082DE62267C5476C8C3EEF5E7C96170BDE7ECFBA5A7EAF951A787D59827CDAE4CC32FB40F5AC8C926859
  265:d=1  hl=2 l=   3 prim: INTEGER           :010001

Before we move ahead, I want to show why we are doing all this.

Because the RSA Public Key has this structure:

RSAPublicKey ::= SEQUENCE {
  modulus           INTEGER (n),
  publicExponent    INTEGER (e)
}

So, if you look at the output of our DER encoding command, we see the things that are needed in the above RSA Public Key structure.

In the output, we have a SEQUENCE consisting of two primitive (“prim”) INTEGERs.
That’s exactly what we need.

Now, we are almost at the final part: actually converting the “n” and “e” to an RSA Public key.

❯ openssl rsa \
   -pubin \
   -inform DER \
   -in rsa_pub.der \
   -outform PEM \
   -out public.pem
writing RSA key

❯ cat public.pem 
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvKsQjvpHQWbez2dLiTb2
aJp36SKpVWvk+egE1pRertMJmtq3eeDPskb8n/msAWY4GKIMx3RnmfKBMbs/WHAk
Vt681cH0AzF5CR/oUtJ0Unde1rInUls5nxQcQ7/cCjApyKQlY5x5Z/vASyh7fOvM
KUWmfLJt7M20hDoEvlM0WF9kUeqAgexBXlFv106qc+3CoO2+HPN6mlOn8WqHd+Ky
/jQaj5xm//A0o04H7JEu09n7/Z9Rws9TFqBHaGCXwio3cozh2Bjv6da7rmyZUSp7
ztH/4UcfYQgt5iJnxUdsjD7vXnyWFwvefs+6Wn6vlRp4fVmCfNrkzDL7QPWsjJJo
WQIDAQAB
-----END PUBLIC KEY-----

Hurrah! Hurrah! Hurrah! We have the public key! finally!

ok, but let’s verify once

❯ openssl rsa -pubin -in public.pem -text -noout
Public-Key: (2048 bit)
Modulus:
    00:bc:ab:10:8e:fa:47:41:66:de:cf:67:4b:89:36:
    f6:68:9a:77:e9:22:a9:55:6b:e4:f9:e8:04:d6:94:
    5e:ae:d3:09:9a:da:b7:79:e0:cf:b2:46:fc:9f:f9:
    ac:01:66:38:18:a2:0c:c7:74:67:99:f2:81:31:bb:
    3f:58:70:24:56:de:bc:d5:c1:f4:03:31:79:09:1f:
    e8:52:d2:74:52:77:5e:d6:b2:27:52:5b:39:9f:14:
    1c:43:bf:dc:0a:30:29:c8:a4:25:63:9c:79:67:fb:
    c0:4b:28:7b:7c:eb:cc:29:45:a6:7c:b2:6d:ec:cd:
    b4:84:3a:04:be:53:34:58:5f:64:51:ea:80:81:ec:
    41:5e:51:6f:d7:4e:aa:73:ed:c2:a0:ed:be:1c:f3:
    7a:9a:53:a7:f1:6a:87:77:e2:b2:fe:34:1a:8f:9c:
    66:ff:f0:34:a3:4e:07:ec:91:2e:d3:d9:fb:fd:9f:
    51:c2:cf:53:16:a0:47:68:60:97:c2:2a:37:72:8c:
    e1:d8:18:ef:e9:d6:bb:ae:6c:99:51:2a:7b:ce:d1:
    ff:e1:47:1f:61:08:2d:e6:22:67:c5:47:6c:8c:3e:
    ef:5e:7c:96:17:0b:de:7e:cf:ba:5a:7e:af:95:1a:
    78:7d:59:82:7c:da:e4:cc:32:fb:40:f5:ac:8c:92:
    68:59
Exponent: 65537 (0x10001)

We got the Public Key! Let’s verify the JWT token now.

Ok, I’m back to doing things by myself now.

Remember, Part 1 and Part 2 of the token were actually the “Header” and “Payload”, and Part 3 the signature.

What we have to verify is Part 1 and Part 2 (joined by the dot) against the signature in Part 3.

So, let’s sort out the required parts.

TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6IncwY3FpcXhvZGt1SFlGelNQa1FwenFMcmpoeEFkVi1McjFYcTZVTEh3X1kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzk3Njg3ODgxLCJpYXQiOjE3NjYxNTE4ODEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiNmJiNzc0MjMtNmE3Yi00MjBmLTg1MDgtOTUzY2Y0YmE5MWZhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwibm9kZSI6eyJuYW1lIjoia2luZC1jb250cm9sLXBsYW5lIiwidWlkIjoiNWU0MmM4M2YtMmI2NC00ZjU3LWEyZGMtMjI3M2ZmZjk3ZTBlIn0sInBvZCI6eyJuYW1lIjoiand0LXRlc3QiLCJ1aWQiOiI4M2E5YmE4OC0yOTg5LTRlYmItYTRiZS04ZjA0ODY2ZmM4OGEifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI4Y2ZhNWYxNS0wOWJhLTRmM2QtODE2Ny02OGFhNjE5ZjRmN2YifSwid2FybmFmdGVyIjoxNzY2MTU1NDg4fSwibmJmIjoxNzY2MTUxODgxLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZWZhdWx0In0.ByKLRb2376aYeAmOWL1LTHsRmwgjWYp3kklUmoDcAzfMXWOBOcU_R4iXC4UqM5iwpEw_lWhDgEndGghiUg6HFau0rtj5VxFiFWwXkxSfzYNvxzW_nO3uFZlI4R3tWJqIeLMX3hqaWQGb_LvfvjI1My6XNZhkt8UTByUg3nKTmtMWmg-9-XKMylmD078vT4n8f0nSL6YlchJuTFivWc1lGE-FrZmWk6WeiRtH_jTyXp95dhg_Chf566otezUrPE8ern-8sI0rSVDzvLNsF4YvL9IXx2JQn57QR_Pr3otFXpeUTgj5oBUllsCTrA2xpXRmWxUD9qoncjviVkAkcj1fiw

❯ echo -n "$TOKEN" | cut -d. -f1,2 > signed-data.txt

❯ echo "$TOKEN" | cut -d. -f3 | tr '_-' '/+' | base64 -d > signature.bin

ok, now, we verify the signed-data.txt with the signature.bin

❯ openssl dgst -sha256 \
   -verify public.pem \
   -signature signature.bin \
   signed-data.txt
Verification failure
40F7505CB27F0000:error:02000068:rsa routines:ossl_rsa_verify:bad signature:crypto/rsa/rsa_sign.c:442:
40F7505CB27F0000:error:1C880004:Provider routines:rsa_verify_directly:RSA lib:providers/implementations/signature/rsa_sig.c:1043:

and I failed. 😂

Goodness, the process was tiring, so I’m not repeating it right now.

I’ll come back to it and see where I made mistakes.

But this effort was to learn how it is done, i.e., how a signature verification happens.
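
For whenever I (or you) retry this, here is a condensed sketch of that final step, reusing the b64url_dec helper from earlier. I have not re-verified it against this token, but note the printf and tr -d '\n': a stray trailing newline in signed-data.txt (which plain echo and cut both add) is a classic cause of a “bad signature” error.

printf '%s' "$TOKEN" | cut -d. -f1,2 | tr -d '\n' > signed-data.txt   # header.payload, no newline
printf '%s' "$TOKEN" | cut -d. -f3 | b64url_dec > signature.bin       # raw signature bytes
openssl dgst -sha256 -verify public.pem -signature signature.bin signed-data.txt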

And also, I should not forget that it all started with me just trying to understand whether a Kubernetes Service Account token is a JWT.

I got my answer and I learnt so much more.

There’s an article that I haven’t read just yet, but that I was recommended to read (by Lucas Käldström).
Since we didn’t end with a shiny green “verified” message, I will leave you to read that brilliant article:
RSA Signing is Not RSA Decryption

Thank you, if you followed so far. :)

December 19, 2025 12:00 AM

Understanding kubeadm Bootstrap Tokens (through Node Bootstrapping)

(This blog is basically a set of notes for self, as I read and try to understand the thesis paper, Usable Access Control in Cloud Management Systems, written by Lucas Käldström.)

For a good while now, I have been wanting to understand how a node joins a Kubernetes cluster using the kubeadm join command.
And especially the part about the symmetric token (the one we generate, or get from the control plane node after a successful kubeadm init run, and then share with all other nodes wanting to join the cluster).

So, things I want to understand are:

  • why is there a symmetric token?
  • and what is the role of this token?
  • and can I try to create a kubeadm token manually and use that to join an existing Kubernetes cluster successfully?

And as I read more and more of the thesis paper (linked at the top), I am understanding that the access control mechanisms (for authentication, authorization, and admission) used within Kubernetes clusters are extremely elaborate and thought out from a security perspective (and of course, they’re not simple at all, not right away at least).
So, seeing a simple token passed through the command line in plain text felt a bit out of place (or too simple, right away).

And of course, I have a feeling that it is not. So, the following is me trying to figure out some answers to my questions.


Q: What is the role of a kubeadm bootstrap token?

I created a simple Kind cluster, and looked for anything “bootstrap” related on the kube-apiserver-* pod.

❯ kind create cluster

❯ kubectl get pod kube-apiserver-kind-control-plane -n kube-system -o yaml | grep "bootstrap"

   - --enable-bootstrap-token-auth=true    

I got a flag in the output, --enable-bootstrap-token-auth=true.
And I don’t know what this flag actually does.

Now, let’s create a simple docker container with the kube-apiserver image and look at the kube-apiserver --help menu to get the definition of the flag.

❯ kubectl get pod kube-apiserver-kind-control-plane -n kube-system -o yaml | grep "image:"
   image: registry.k8s.io/kube-apiserver:v1.34.0
   image: registry.k8s.io/kube-apiserver-amd64:v1.34.0

❯ docker run -it --rm --entrypoint="" registry.k8s.io/kube-apiserver:v1.34.0 kube-apiserver --help | grep "bootstrap"

     --enable-bootstrap-token-auth                       Enable to allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrapping authentication.

In simple words, here is how I understand it:

  • If I set the flag --enable-bootstrap-token-auth to true, then the API server is configured to trust a list of (kubeadm) bootstrap tokens.
  • These tokens need to be stored as Secret objects (of a very specific type - bootstrap.kubernetes.io/token) in the kube-system namespace.
  • And once that is done, if a joining node (actually, the kubelet running on the joining node) makes a “TLS bootstrapping authentication” request using the “bootstrap token” in the request header (Authorization: Bearer <token>), then the request will be authenticated (a sketch of such a request follows this list).
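
To make that last bullet concrete, here is a hedged sketch of what such a request looks like on the wire. The endpoint and token values are the ones that show up later in this post, and -k skips TLS verification, much like the join flag we will end up using:

TOKEN="pqrstu.abcdef1234567890"   # <token-id>.<token-secret>
curl -k -H "Authorization: Bearer $TOKEN" \
  https://172.18.0.2:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config

Whether the API server then lets such a request through is purely an authorization (RBAC) question, which is exactly what the rest of this post runs into.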

So, we have some information now about the kubeadm bootstrap token.

But you know what, kubeadm itself explains the role of the token much better:

root@kind-control-plane:/# kubeadm token --help

This command manages bootstrap tokens. It is optional and needed only for advanced use cases.

In short, bootstrap tokens are used for establishing bidirectional trust between a client and a server.
A bootstrap token can be used when a client (for example a node that is about to join the cluster) needs
to trust the server it is talking to. Then a bootstrap token with the "signing" usage can be used.
bootstrap tokens can also function as a way to allow short-lived authentication to the API Server
(the token serves as a way for the API Server to trust the client), for example for doing the TLS Bootstrap.

What is a bootstrap token more exactly?
 - It is a Secret in the kube-system namespace of type "bootstrap.kubernetes.io/token".
 - A bootstrap token must be of the form "[a-z0-9]{6}.[a-z0-9]{16}". The former part is the public token ID,
   while the latter is the Token Secret and it must be kept private at all circumstances!
 - The name of the Secret must be named "bootstrap-token-(token-id)".

You can read more about bootstrap tokens here:
  https://kubernetes.io/docs/admin/bootstrap-tokens/

Also, the link at the bottom doesn’t work anymore.
The correct one (at least as of writing) is kubeadm token.

So, at this point we have a verbal answer for the question: what is the role of the kubeadm token?

Next, what I want to try is to create a kubeadm token manually (and now, from the kubeadm --help output, I also know what the format of a valid kubeadm token is).
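
As an aside, kubeadm itself can mint a random token in exactly this format, without touching any cluster:

❯ kubeadm token generate

A rough plain-shell equivalent (my own sketch) would be:

❯ printf '%s.%s\n' \
    "$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)" \
    "$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"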


I am continuing on the Kind cluster we created above.

The control-plane node (docker container) was created with the IP address - 172.18.0.2.
(leaving it here as a note, because we will use it later. And this is the future me talking, I didn’t know it as I was trying things.)

To simulate a new node, I didn’t use kind’s built-in multi-node support.
Instead, I created a plain docker container on the same network (the docker bridge network with the name, kind):

❯ docker run --rm -it --name joining-node \
  --privileged \
  --network kind \
  kindest/node:latest

❯ docker container ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                       NAMES
83ab08acf723   kindest/node:latest    "/usr/local/bin/entr…"   7 seconds ago       Up 6 seconds                                   joining-node

This new container has nothing but the basic tools installed (kubeadm, kubelet, kubectl and a clean filesystem, coming from the kindest/node image).
So, as of now, no certificates, no kubeconfig, nothing.

Also note the name of the container, joining-node.
(I will use it later to exec inside the container and use it as a joining node).


Q: Can I create a kubeadm token manually and use it to join an existing Kubernetes cluster successfully?

Back to the rules we got earlier:

What is a bootstrap token more exactly?

  • It is a Secret in the kube-system namespace of type “bootstrap.kubernetes.io/token”.
  • A bootstrap token must be of the form “[a-z0-9]{6}.[a-z0-9]{16}”. The former part is the public token ID, while the latter is the Token Secret and it must be kept private at all circumstances!
  • The name of the Secret must be named “bootstrap-token-(token-id)”.

I am creating the following Secret object in the cluster (from the kind-control-plane node).
Notice the token bits I added in the yaml - token-id: pqrstu and token-secret: abcdef1234567890.
Together, these will give us the full token as pqrstu.abcdef1234567890.
We will use this to make our kubeadm join requests.

To come up with the following template, I referred to existing tokens from another multi-node Kind cluster.
(updated later: the template is also available here - Bootstrap Token Secret Format)

# bootstrap-token.yaml

apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-pqrstu
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: pqrstu
  token-secret: abcdef1234567890
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  expiration: "2030-01-01T00:00:00Z"

Apply to create the token secret:

❯ kubectl apply -f bootstrap-token.yaml

❯ kubectl get secrets -n kube-system

NAME                     TYPE                            DATA   AGE
bootstrap-token-abcdef   bootstrap.kubernetes.io/token   6      58m
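
As a cross-check that the hand-made Secret is well-formed, kubeadm itself can list the bootstrap tokens it recognizes. Run on the control-plane node, it should show pqrstu.abcdef1234567890 with the authentication and signing usages:

root@kind-control-plane:/# kubeadm token list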

Now let’s make some first attempts to make the new docker container node (joining-node) join the Kind cluster. (I have kept the verbosity of the logs very high).

Inside the new node docker container, I started by running kubeadm join with no arguments:

❯ docker exec -it joining-node /bin/bash

root@83ab08acf723:/# kubeadm join --v=9

I1216 07:00:01.797859     164 join.go:423] [preflight] found NodeName empty; using OS hostname as NodeName
I1216 07:00:01.798056     164 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
error: discovery: Invalid value: "": bootstrapToken or file must be set
no stack trace

That failed immediately.

Next, let’s try to give it the token we created above:

root@83ab08acf723:/# kubeadm join pqrstu.abcdef1234567890 --v=9

error: [discovery.bootstrapToken.caCertHashes: Invalid value: "": using token-based discovery without caCertHashes can be unsafe. Set unsafeSkipCAVerification as true in your kubeadm config file or pass --discovery-token-unsafe-skip-ca-verification flag to continue, discovery.bootstrapToken.token: Invalid value: "": the bootstrap token is invalid, discovery.bootstrapToken.apiServerEndpoint: Invalid value: "pqrstu.abcdef1234567890": address pqrstu.abcdef1234567890: missing port in address, discovery.tlsBootstrapToken: Invalid value: "": the bootstrap token is invalid]
no stack trace

That produced more informative error logs.
Now we at least know that kubeadm expects the API server endpoint first, and then the token (maybe).

Also, the logs suggested we pass the --discovery-token-unsafe-skip-ca-verification flag to continue.
Let’s do that as well, to make some progress.

(and yes, I could have already looked at the kubeadm --help menu to understand the required format, but the above test runs were intentional, to understand the basic flow).

Ok, let’s provide everything properly that kubeadm actually needs:

root@83ab08acf723:/# kubeadm join 172.18.0.2:6443 \
  --token="pqrstu.abcdef1234567890" \
  --discovery-token-unsafe-skip-ca-verification --v=9

This time things moved a bit further.

Let’s look at some important bits of the logs part by part.

230 token.go:229] [discovery] Waiting for the cluster-info ConfigMap to receive a JWS signature for token ID "pqrstu"
230 type.go:165] "Request Body" body=""
230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" 'https://172.18.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s'
 >
230 round_trippers.go:562] "HTTP Trace: Dial succeed" network="tcp" address="172.18.0.2:6443"
230 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s" status="200 OK" headers=<...>

From the above bits, it looks like the discovery request for the cluster-info ConfigMap (which lives in the kube-public namespace that kubeadm exposes for exactly this purpose) succeeded (look at the status="200 OK").

But then, on the next request, where the token actually travels as a Bearer credential, the API Server authenticated us but refused to let us continue further.

230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" -H "Authorization: Bearer <masked>" 'https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s'
 >

230 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s" status="403 Forbidden" headers=<...>

230 token.go:249] [discovery] Retrying due to error: could not find a JWS signature in the cluster-info ConfigMap for token ID "pqrstu"

unable to fetch the kubeadm-config ConfigMap:
configmaps "kubeadm-config" is forbidden:
User "system:bootstrap:pqrstu" cannot get resource "configmaps"
in namespace "kube-system"

And we are also able to see where the problem is: User "system:bootstrap:pqrstu" cannot get resource "configmaps" in namespace "kube-system".

So, we now know that when we pass the token to the kubeadm join command, the API Server sees our requests as coming from the user system:bootstrap:pqrstu.

ok, let’s try to fix it now by giving it the required permissions.

We know that Kubernetes uses RBAC to configure these kinds of permissions on Kubernetes objects.
(And once again, I looked at another multi-node Kind cluster to see what permissions we were missing, and came up with the following template.)

I created a ClusterRoleBinding object that allowed the user (system:bootstrap:pqrstu) to read cluster configuration.

AND PLEASE NOTE: This is not something I would do in a real cluster. This is purely just to move forward with this test!

# clusterrolebinding-bootstrap-token.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: allow-bootstrap-read-kubeadm-config
subjects:
- kind: User
  name: system:bootstrap:pqrstu
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Apply it to the cluster:

❯ kubectl apply -f clusterrolebinding-bootstrap-token.yaml 
clusterrolebinding.rbac.authorization.k8s.io/allow-bootstrap-read-kubeadm-config created

❯ kubectl get clusterrolebinding allow-bootstrap-read-kubeadm-config -n kube-system 
NAME                                  ROLE                        AGE
allow-bootstrap-read-kubeadm-config   ClusterRole/cluster-admin   49s
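
As an aside: binding cluster-admin is a sledgehammer. A narrower alternative (my own sketch, loosely modeled on the kubeadm:nodes-kubeadm-config Role that kubeadm init itself creates in kube-system) would grant just the forbidden read to the whole system:bootstrappers group:

# role-read-kubeadm-config.yaml (sketch; NOT what I applied above)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-kubeadm-config
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubeadm-config", "kubelet-config"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-kubeadm-config
  namespace: kube-system
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-kubeadm-config
  apiGroup: rbac.authorization.k8s.io

(The join flow needs a bit more than this single read, so for this experiment the blunt cluster-admin binding was the quick way through.)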

Now, we have configured more permissions for our user system:bootstrap:pqrstu.

Let’s re-run the same command:

root@83ab08acf723:/# kubeadm join 172.18.0.2:6443 \
  --token="pqrstu.abcdef1234567890" \
  --discovery-token-unsafe-skip-ca-verification --v=9

Voila!
This time it worked.
The kubeadm join ran successfully and returned: This node has joined the cluster.

I can confirm it from the Kind cluster’s control-plane node as well.
I can see this new docker container showing up as a node.

❯ kubectl get nodes 
NAME                 STATUS     ROLES           AGE    VERSION
83ab08acf723         NotReady   <none>          108s   v1.35.0-alpha.2.488+f35f9509a69cc6
kind-control-plane   Ready      control-plane   106m   v1.34.0
Please see the full logs below. These contain all the requests, showing how and where the kubeadm token is used in request headers.
root@83ab08acf723:/# kubeadm join 172.18.0.2:6443   --token="pqrstu.abcdef1234567890"   --discovery-token-unsafe-skip-ca-verification --v=9
I1215 14:50:15.160788     230 join.go:423] [preflight] found NodeName empty; using OS hostname as NodeName
I1215 14:50:15.161142     230 initconfiguration.go:122] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
[preflight] Running pre-flight checks
I1215 14:50:15.161266     230 preflight.go:93] [preflight] Running general checks
I1215 14:50:15.161321     230 checks.go:315] validating the existence of file /etc/kubernetes/kubelet.conf
I1215 14:50:15.161355     230 checks.go:315] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1215 14:50:15.161382     230 checks.go:89] validating the container runtime
I1215 14:50:15.167560     230 checks.go:120] validating the container runtime version compatibility
I1215 14:50:15.170013     230 checks.go:685] validating whether swap is enabled or not
	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I1215 14:50:15.170160     230 checks.go:405] validating the presence of executable losetup
I1215 14:50:15.170232     230 checks.go:405] validating the presence of executable mount
I1215 14:50:15.170257     230 checks.go:405] validating the presence of executable cp
I1215 14:50:15.170278     230 checks.go:551] running system verification checks
I1215 14:50:15.231856     230 checks.go:436] checking whether the given node name is valid and reachable using net.LookupHost
I1215 14:50:15.232022     230 checks.go:651] validating kubelet version
I1215 14:50:15.276735     230 checks.go:165] validating if the "kubelet" service is enabled and active
I1215 14:50:15.294376     230 checks.go:238] validating availability of port 10250
I1215 14:50:15.294632     230 checks.go:315] validating the existence of file /etc/kubernetes/pki/ca.crt
I1215 14:50:15.294661     230 checks.go:465] validating if the connectivity type is via proxy or direct
I1215 14:50:15.294728     230 checks.go:364] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1215 14:50:15.294770     230 join.go:553] [preflight] Discovering cluster-info
I1215 14:50:15.294792     230 token.go:71] [discovery] Created cluster-info discovery client, requesting info from "172.18.0.2:6443"
I1215 14:50:15.294942     230 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
I1215 14:50:15.294959     230 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
I1215 14:50:15.294967     230 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1215 14:50:15.294976     230 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1215 14:50:15.294984     230 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1215 14:50:15.294991     230 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1215 14:50:15.295371     230 token.go:229] [discovery] Waiting for the cluster-info ConfigMap to receive a JWS signature for token ID "pqrstu"
I1215 14:50:15.295452     230 type.go:165] "Request Body" body=""
I1215 14:50:15.295539     230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" 'https://172.18.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s'
 >
I1215 14:50:15.295931     230 round_trippers.go:562] "HTTP Trace: Dial succeed" network="tcp" address="172.18.0.2:6443"
I1215 14:50:15.302680     230 round_trippers.go:632] "Response" verb="GET" url="https://172.18.0.2:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s" status="200 OK" headers=<
	Audit-Id: 6c3035a3-0824-415c-8e5d-0db271cdb29f
	Cache-Control: no-cache, private
	Content-Length: 2102
	Content-Type: application/vnd.kubernetes.protobuf
	Date: Mon, 15 Dec 2025 14:50:15 GMT
	X-Kubernetes-Pf-Flowschema-Uid: f54c5d84-c2d8-4cbb-8801-07adb48ce73d
	X-Kubernetes-Pf-Prioritylevel-Uid: 7df0c0a7-2f8a-40ca-9625-0b6635feccd2
 > milliseconds=7 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=4 serverProcessingMilliseconds=2
I1215 14:50:15.302857     230 type.go:165] "Response Body" body=<
	00000000  6b 38 73 00 0a 0f 0a 02  76 31 12 09 43 6f 6e 66  |k8s.....v1..Conf|
	00000010  69 67 4d 61 70 12 9a 10  0a 99 02 0a 0c 63 6c 75  |igMap........clu|
	00000020  73 74 65 72 2d 69 6e 66  6f 12 00 1a 0b 6b 75 62  |ster-info....kub|
	00000030  65 2d 70 75 62 6c 69 63  22 00 2a 24 38 34 37 38  |e-public".*$8478|
	00000040  65 64 38 35 2d 62 61 66  39 2d 34 63 63 61 2d 38  |ed85-baf9-4cca-8|
	00000050  31 65 32 2d 37 35 36 66  62 39 61 61 65 62 62 61  |1e2-756fb9aaebba|
	00000060  32 04 38 34 33 35 38 00  42 08 08 d7 da ff c9 06  |2.84358.B.......|
	00000070  10 00 8a 01 54 0a 07 6b  75 62 65 61 64 6d 12 06  |....T..kubeadm..|
	00000080  55 70 64 61 74 65 1a 02  76 31 22 08 08 d7 da ff  |Update..v1".....|
	00000090  c9 06 10 00 32 08 46 69  65 6c 64 73 56 31 3a 27  |....2.FieldsV1:'|
	000000a0  0a 25 7b 22 66 3a 64 61  74 61 22 3a 7b 22 2e 22  |.%{"f:data":{"."|
	000000b0  3a 7b 7d 2c 22 66 3a 6b  75 62 65 63 6f 6e 66 69  |:{},"f:kubeconfi|
	000000c0  67 22 3a 7b 7d 7d 7d 42  00 8a 01 68 0a 17 6b 75  |g":{}}}B...h..ku|
	000000d0  62 65 2d 63 6f 6e 74 72  6f 6c 6c 65 72 2d 6d 61  |be-controller-ma|
	000000e0  6e 61 67 65 72 12 06 55  70 64 61 74 65 1a 02 76  |nager..Update..v|
	000000f0  31 22 08 08 a8 87 80 ca  06 10 00 32 08 46 69 65  |1".........2.Fie|
	00000100  6c 64 73 56 31 3a 2b 0a  29 7b 22 66 3a 64 61 74  |ldsV1:+.){"f:dat|
	00000110  61 22 3a 7b 22 66 3a 6a  77 73 2d 6b 75 62 65 63  |a":{"f:jws-kubec|
	00000120  6f 6e 66 69 67 2d 70 71  72 73 74 75 22 3a 7b 7d  |onfig-pqrstu":{}|
	00000130  7d 7d 42 00 12 6e 0a 15  6a 77 73 2d 6b 75 62 65  |}}B..n..jws-kube|
	00000140  63 6f 6e 66 69 67 2d 70  71 72 73 74 75 12 55 65  |config-pqrstu.Ue|
	00000150  79 4a 68 62 47 63 69 4f  69 4a 49 55 7a 49 31 4e  |yJhbGciOiJIUzI1N|
	00000160  69 49 73 49 6d 74 70 5a  43 49 36 49 6e 42 78 63  |iIsImtpZCI6InBxc|
	00000170  6e 4e 30 64 53 4a 39 2e  2e 75 5f 4d 74 56 4f 42  |nN0dSJ9..u_MtVOB|
	00000180  69 68 65 4e 5a 37 5a 57  31 5f 5a 36 52 6d 67 78  |iheNZ7ZW1_Z6Rmgx|
	00000190  35 50 41 5f 69 76 6b 77  31 2d 62 46 53 35 49 50  |5PA_ivkw1-bFS5IP|
	000001a0  71 33 34 55 12 8b 0d 0a  0a 6b 75 62 65 63 6f 6e  |q34U.....kubecon|
	000001b0  66 69 67 12 fc 0c 61 70  69 56 65 72 73 69 6f 6e  |fig...apiVersion|
	000001c0  3a 20 76 31 0a 63 6c 75  73 74 65 72 73 3a 0a 2d  |: v1.clusters:.-|
	000001d0  20 63 6c 75 73 74 65 72  3a 0a 20 20 20 20 63 65  | cluster:.    ce|
	000001e0  72 74 69 66 69 63 61 74  65 2d 61 75 74 68 6f 72  |rtificate-author|
	000001f0  69 74 79 2d 64 61 74 61  3a 20 4c 53 30 74 4c 53  |ity-data: LS0tLS|
	00000200  31 43 52 55 64 4a 54 69  42 44 52 56 4a 55 53 55  |1CRUdJTiBDRVJUSU|
	00000210  5a 4a 51 30 46 55 52 53  30 74 4c 53 30 74 43 6b  |ZJQ0FURS0tLS0tCk|
	00000220  31 4a 53 55 52 43 56 45  4e 44 51 57 55 79 5a 30  |1JSURCVENDQWUyZ0|
	00000230  46 33 53 55 4a 42 5a 30  6c 4a 55 46 70 36 4e 31  |F3SUJBZ0lJUFp6N1|
	00000240  5a 44 52 7a 6c 4b 64 6b  31 33 52 46 46 5a 53 6b  |ZDRzlKdk13RFFZSk|
	00000250  74 76 57 6b 6c 6f 64 6d  4e 4f 51 56 46 46 54 45  |tvWklodmNOQVFFTE|
	00000260  4a 52 51 58 64 47 56 45  56 55 54 55 4a 46 52 30  |JRQXdGVEVUTUJFR0|
	00000270  45 78 56 55 55 4b 51 58  68 4e 53 32 45 7a 56 6d  |ExVUUKQXhNS2EzVm|
	00000280  6c 61 57 45 70 31 57 6c  68 53 62 47 4e 36 51 57  |laWEp1WlhSbGN6QW|
	00000290  56 47 64 7a 42 35 54 6c  52 46 65 55 31 55 56 58  |VGdzB5TlRFeU1UVX|
	000002a0  68 4e 56 45 45 30 54 56  52 57 59 55 5a 33 4d 48  |hNVEE0TVRWYUZ3MH|
	000002b0  70 4f 56 45 56 35 54 56  52 4e 65 45 31 55 52 58  |pOVEV5TVRNeE1URX|
	000002c0  70 4e 56 46 5a 68 54 55  4a 56 65 41 70 46 65 6b  |pNVFZhTUJVeApFek|
	000002d0  46 53 51 6d 64 4f 56 6b  4a 42 54 56 52 44 62 58  |FSQmdOVkJBTVRDbX|
	000002e0  51 78 57 57 31 57 65 57  4a 74 56 6a 42 61 57 45  |QxWW1WeWJtVjBaWE|
	000002f0  31 33 5a 32 64 46 61 55  31 42 4d 45 64 44 55 33  |13Z2dFaU1BMEdDU3|
	00000300  46 48 55 30 6c 69 4d 30  52 52 52 55 4a 42 55 56  |FHU0liM0RRRUJBUV|
	00000310  56 42 51 54 52 4a 51 6b  52 33 51 58 64 6e 5a 30  |VBQTRJQkR3QXdnZ0|
	00000320  56 4c 43 6b 46 76 53 55  4a 42 55 55 52 34 4d 6a  |VLCkFvSUJBUUR4Mj|
	00000330  56 79 64 55 35 61 54 6e  42 70 62 6a 68 30 51 54  |VydU5aTnBpbjh0QT|
	00000340  46 4d 52 6d 39 71 4e 31  6c 74 4e 44 56 6b 4d 30  |FMRm9qN1ltNDVkM0|
	00000350  4e 30 65 56 42 71 57 47  73 30 53 6d 4a 31 54 46  |N0eVBqWGs0SmJ1TF|
	00000360  70 53 56 55 55 33 61 6e  6c 59 53 7a 64 6d 4d 6e  |pSVUU3anlYSzdmMn|
	00000370  70 49 53 44 45 35 4d 46  59 4b 54 58 42 36 4e 31  |pISDE5MFYKTXB6N1|
	00000380  46 59 62 7a 64 32 53 31  70 6f 65 46 6c 70 55 7a  |FYbzd2S1poeFlpUz|
	00000390  64 46 61 46 52 4d 65 56  4e 6d 4f 47 68 6f 53 32  |dFaFRMeVNmOGhoS2|
	000003a0  59 34 56 32 55 34 5a 6b  6c 58 55 6b 46 45 4f 56  |Y4V2U4ZklXUkFEOV|
	000003b0  6c 55 56 57 70 61 59 6c  4a 6a 63 6a 4e 6f 52 46  |lUVWpaYlJjcjNoRF|
	000003c0  6f 72 57 44 68 6c 62 47  70 78 52 48 41 31 51 51  |orWDhlbGpxRHA1QQ|
	000003d0  70 52 65 44 4e 34 62 32  52 55 4d 6c 68 4c 4e 31  |pReDN4b2RUMlhLN1|
	000003e0  56 72 54 6e 68 50 64 6d  39 36 5a 58 68 42 4c 32  |VrTnhPdm96ZXhBL2|
	000003f0  74 53 4f 58 6c 78 65 6d  4a 72 57 56 70 7a 61 30  |tSOXlxemJrWVpza0|
	00000400  46 42 59 58 46 5a 51 54  4e 71 52 7a 42 4a 63 56  |FBYXFZQTNqRzBJcV|
	00000410  4e 46 51 58 68 4c 52 56  68 4c 57 6e 64 4d 4d 6d  |NFQXhLRVhLWndMMm|
	00000420  77 77 59 32 45 77 43 6e  70 4b 52 6a 6c 6e 62 54  |wwY2EwCnpKRjlnbT|
	00000430  4e 33 64 33 68 4f 55 47  64 49 53 57 4e 73 4e 57  |N3d3hOUGdISWNsNW|
	00000440  68 68 63 45 56 46 4e 54  56 36 63 30 67 77 65 47  |hhcEVFNTV6c0gweG|
	00000450  68 76 57 58 64 4b 5a 32  78 53 4f 57 31 6d 5a 6c  |hvWXdKZ2xSOW1mZl|
	00000460  64 4e 4e 7a 68 6b 52 32  4d 78 55 33 68 79 52 57  |dNNzhkR2MxU3hyRW|
	00000470  46 30 65 6d 5a 52 4f 44  46 72 59 55 77 4b 56 6b  |F0emZRODFrYUwKVk|
	00000480  52 50 5a 30 56 6b 62 58  4e 4b 65 48 52 4a 55 32  |RPZ0VkbXNKeHRJU2|
	00000490  45 31 53 6d 74 43 54 58  56 52 5a 7a 56 52 57 57  |E1SmtCTXVRZzVRWW|
	000004a0  39 43 51 57 4e 78 55 30  6b 31 64 33 52 30 4e 47  |9CQWNxU0k1d3R0NG|
	000004b0  6b 34 61 6c 6f 76 64 6e  70 45 64 6d 56 36 4e 48  |k4alovdnpEdmV6NH|
	000004c0  67 33 55 45 64 77 63 56  70 5a 56 54 5a 4c 54 33  |g3UEdwcVpZVTZLT3|
	000004d0  6c 48 5a 67 70 75 64 46  42 4a 55 55 49 33 64 55  |lHZgpudFBJUUI3dU|
	000004e0  46 6f 52 57 78 4d 65 46  52 53 4b 7a 4a 68 57 54  |FoRWxMeFRSKzJhWT|
	000004f0  63 7a 53 55 39 73 63 56  6b 76 51 57 64 4e 51 6b  |czSU9scVkvQWdNQk|
	00000500  46 42 52 32 70 58 56 45  4a 59 54 55 45 30 52 30  |FBR2pXVEJYTUE0R0|
	00000510  45 78 56 57 52 45 64 30  56 43 4c 33 64 52 52 55  |ExVWREd0VCL3dRRU|
	00000520  46 33 53 55 4e 77 52 45  46 51 43 6b 4a 6e 54 6c  |F3SUNwREFQCkJnTl|
	00000530  5a 49 55 6b 31 43 51 57  59 34 52 55 4a 55 51 55  |ZIUk1CQWY4RUJUQU|
	00000540  52 42 55 55 67 76 54 55  49 77 52 30 45 78 56 57  |RBUUgvTUIwR0ExVW|
	00000550  52 45 5a 31 46 58 51 6b  4a 53 55 79 39 56 4e 44  |REZ1FXQkJSUy9VND|
	00000560  4e 73 59 6a 51 32 62 45  70 74 52 48 6b 78 57 56  |NsYjQ2bEptRHkxWV|
	00000570  42 7a 56 6e 64 43 4d 31  4a 44 55 44 52 45 51 56  |BzVndCM1JDUDREQV|
	00000580  59 4b 51 6d 64 4f 56 6b  68 53 52 55 56 45 61 6b  |YKQmdOVkhSRUVEak|
	00000590  46 4e 5a 32 64 77 63 6d  52 58 53 6d 78 6a 62 54  |FNZ2dwcmRXSmxjbT|
	000005a0  56 73 5a 45 64 57 65 6b  31 42 4d 45 64 44 55 33  |VsZEdWek1BMEdDU3|
	000005b0  46 48 55 30 6c 69 4d 30  52 52 52 55 4a 44 64 31  |FHU0liM0RRRUJDd1|
	000005c0  56 42 51 54 52 4a 51 6b  46 52 51 33 56 34 64 31  |VBQTRJQkFRQ3V4d1|
	000005d0  46 4a 54 6a 64 6f 61 41  70 59 59 6d 52 4c 63 30  |FJTjdoaApYYmRLc0|
	000005e0  39 72 56 33 42 42 52 46  68 75 59 57 35 4f 57 48  |9rV3BBRFhuYW5OWH|
	000005f0  5a 4b 63 45 52 6b 54 57  38 35 61 47 38 79 54 58  |ZKcERkTW85aG8yTX|
	00000600  70 77 63 55 78 30 4f 58  46 6b 61 55 52 61 59 6b  |pwcUx0OXFkaURaYk|
	00000610  78 76 63 44 52 44 59 6e  67 72 63 30 70 53 59 7a  |xvcDRDYngrc0pSYz|
	00000620  6c 6d 62 33 4e 59 52 7a  56 58 61 6a 46 56 43 6b  |lmb3NYRzVXajFVCk|
	00000630  46 79 62 6e 6c 6a 5a 6b  6c 34 62 6b 5a 56 4d 57  |FybnljZkl4bkZVMW|
	00000640  74 70 53 48 4e 6e 64 32  55 35 4e 47 35 43 59 55  |tpSHNnd2U5NG5CYU|
	00000650  64 6a 59 56 4e 44 63 58  64 54 62 55 56 49 65 44  |djYVNDcXdTbUVIeD|
	00000660  52 73 4d 56 5a 36 53 69  74 6c 64 30 35 57 62 32  |RsMVZ6Sitld05Wb2|
	00000670  34 34 63 6e 42 75 55 30  74 59 65 47 46 52 55 33  |44cnBuU0tYeGFRU3|
	00000680  41 30 4e 6d 55 4b 4b 30  64 69 64 55 31 34 55 48  |A0NmUKK0didU14UH|
	00000690  64 56 5a 46 6c 69 64 6a  5a 32 57 44 4a 43 64 7a  |dVZFlidjZ2WDJCdz|
	000006a0  56 70 64 33 64 43 61 56  42 32 4f 57 78 43 4f 44  |Vpd3dCaVB2OWxCOD|
	000006b0  45 7a 54 6e 4a 6e 61 31  52 5a 4b 30 46 70 65 55  |EzTnJna1RZK0FpeU|
	000006c0  70 47 61 33 64 79 62 6a  46 51 53 6d 4e 55 64 6d  |pGa3dybjFQSmNUdm|
	000006d0  64 74 64 55 4a 4e 53 6b  34 7a 52 77 70 79 63 45  |dtdUJNSk4zRwpycE|
	000006e0  74 4d 55 48 6b 32 55 7a  4a 7a 4f 44 49 35 62 54  |tMUHk2UzJzODI5bT|
	000006f0  64 57 63 45 77 7a 4f 55  4d 30 4d 47 74 54 62 45  |dWcEwzOUM0MGtTbE|
	00000700  39 42 59 6d 78 4a 52 56  70 50 53 55 56 32 63 6d  |9BYmxJRVpPSUV2cm|
	00000710  46 44 65 56 68 58 54 32  68 55 62 47 56 56 56 6c  |FDeVhXT2hUbGVVVl|
	00000720  56 4c 5a 48 42 50 4e 56  5a 34 55 30 67 76 5a 56  |VLZHBPNVZ4U0gvZV|
	00000730  46 45 43 6c 64 48 57 6e  46 76 53 6a 42 35 57 56  |FECldHWnFvSjB5WV|
	00000740  4e 52 54 47 74 48 54 43  38 78 52 79 39 71 52 32  |NRTGtHTC8xRy9qR2|
	00000750  56 6c 51 57 68 4b 62 6e  56 72 4f 56 70 73 4e 54  |VlQWhKbnVrOVpsNT|
	00000760  63 79 4e 6a 63 77 62 48  4e 71 56 48 46 4d 51 55  |cyNjcwbHNqVHFMQU|
	00000770  35 68 55 69 74 58 53 30  64 47 64 7a 46 4a 4d 6e  |5hUitXS0dGdzFJMn|
	00000780  52 7a 57 56 5a 49 54 45  38 4b 62 57 35 4c 54 6e  |RzWVZITE8KbW5LTn|
	00000790  42 49 61 79 74 72 56 56  4a 34 43 69 30 74 4c 53  |BIaytrVVJ4Ci0tLS|
	000007a0  30 74 52 55 35 45 49 45  4e 46 55 6c 52 4a 52 6b  |0tRU5EIENFUlRJRk|
	000007b0  6c 44 51 56 52 46 4c 53  30 74 4c 53 30 4b 0a 20  |lDQVRFLS0tLS0K. |
	000007c0  20 20 20 73 65 72 76 65  72 3a 20 68 74 74 70 73  |   server: https|
	000007d0  3a 2f 2f 6b 69 6e 64 2d  63 6f 6e 74 72 6f 6c 2d  |://kind-control-|
	000007e0  70 6c 61 6e 65 3a 36 34  34 33 0a 20 20 6e 61 6d  |plane:6443.  nam|
	000007f0  65 3a 20 22 22 0a 63 6f  6e 74 65 78 74 73 3a 20  |e: "".contexts: |
	00000800  6e 75 6c 6c 0a 63 75 72  72 65 6e 74 2d 63 6f 6e  |null.current-con|
	00000810  74 65 78 74 3a 20 22 22  0a 6b 69 6e 64 [truncated 178 chars]
 >
I1215 14:50:15.303934     230 token.go:113] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "172.18.0.2:6443"
I1215 14:50:15.303958     230 discovery.go:53] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1215 14:50:15.303971     230 join.go:567] [preflight] Fetching init configuration
I1215 14:50:15.303980     230 join.go:681] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
I1215 14:50:15.304427     230 type.go:165] "Request Body" body=""
I1215 14:50:15.304525     230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" -H "Authorization: Bearer <masked>" 'https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s'
 >
I1215 14:50:15.305280     230 round_trippers.go:547] "HTTP Trace: DNS Lookup resolved" host="kind-control-plane" address=[{"IP":"172.18.0.2","Zone":""},{"IP":"fc00:****::2","Zone":""}]
I1215 14:50:15.305519     230 round_trippers.go:562] "HTTP Trace: Dial succeed" network="tcp" address="172.18.0.2:6443"
I1215 14:50:15.312270     230 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s" status="200 OK" headers=<
	Audit-Id: dc851cb8-00d2-499a-9d93-791b645d3175
	Cache-Control: no-cache, private
	Content-Length: 935
	Content-Type: application/vnd.kubernetes.protobuf
	Date: Mon, 15 Dec 2025 14:50:15 GMT
	X-Kubernetes-Pf-Flowschema-Uid: f54c5d84-c2d8-4cbb-8801-07adb48ce73d
	X-Kubernetes-Pf-Prioritylevel-Uid: 7df0c0a7-2f8a-40ca-9625-0b6635feccd2
 > milliseconds=7 dnsLookupMilliseconds=0 dialMilliseconds=0 tlsHandshakeMilliseconds=4 serverProcessingMilliseconds=2
I1215 14:50:15.312385     230 type.go:165] "Response Body" body=<
	00000000  6b 38 73 00 0a 0f 0a 02  76 31 12 09 43 6f 6e 66  |k8s.....v1..Conf|
	00000010  69 67 4d 61 70 12 8b 07  0a b9 01 0a 0e 6b 75 62  |igMap........kub|
	00000020  65 61 64 6d 2d 63 6f 6e  66 69 67 12 00 1a 0b 6b  |eadm-config....k|
	00000030  75 62 65 2d 73 79 73 74  65 6d 22 00 2a 24 38 66  |ube-system".*$8f|
	00000040  37 61 38 66 35 34 2d 37  39 62 65 2d 34 34 61 35  |7a8f54-79be-44a5|
	00000050  2d 61 63 35 63 2d 34 34  32 34 64 33 66 37 39 39  |-ac5c-4424d3f799|
	00000060  30 39 32 03 32 30 37 38  00 42 08 08 d7 da ff c9  |092.2078.B......|
	00000070  06 10 00 8a 01 5e 0a 07  6b 75 62 65 61 64 6d 12  |.....^..kubeadm.|
	00000080  06 55 70 64 61 74 65 1a  02 76 31 22 08 08 d7 da  |.Update..v1"....|
	00000090  ff c9 06 10 00 32 08 46  69 65 6c 64 73 56 31 3a  |.....2.FieldsV1:|
	000000a0  31 0a 2f 7b 22 66 3a 64  61 74 61 22 3a 7b 22 2e  |1./{"f:data":{".|
	000000b0  22 3a 7b 7d 2c 22 66 3a  43 6c 75 73 74 65 72 43  |":{},"f:ClusterC|
	000000c0  6f 6e 66 69 67 75 72 61  74 69 6f 6e 22 3a 7b 7d  |onfiguration":{}|
	000000d0  7d 7d 42 00 12 cc 05 0a  14 43 6c 75 73 74 65 72  |}}B......Cluster|
	000000e0  43 6f 6e 66 69 67 75 72  61 74 69 6f 6e 12 b3 05  |Configuration...|
	000000f0  61 70 69 53 65 72 76 65  72 3a 0a 20 20 63 65 72  |apiServer:.  cer|
	00000100  74 53 41 4e 73 3a 0a 20  20 2d 20 6c 6f 63 61 6c  |tSANs:.  - local|
	00000110  68 6f 73 74 0a 20 20 2d  20 31 32 37 2e 30 2e 30  |host.  - 127.0.0|
	00000120  2e 31 0a 20 20 65 78 74  72 61 41 72 67 73 3a 0a  |.1.  extraArgs:.|
	00000130  20 20 2d 20 6e 61 6d 65  3a 20 72 75 6e 74 69 6d  |  - name: runtim|
	00000140  65 2d 63 6f 6e 66 69 67  0a 20 20 20 20 76 61 6c  |e-config.    val|
	00000150  75 65 3a 20 22 22 0a 61  70 69 56 65 72 73 69 6f  |ue: "".apiVersio|
	00000160  6e 3a 20 6b 75 62 65 61  64 6d 2e 6b 38 73 2e 69  |n: kubeadm.k8s.i|
	00000170  6f 2f 76 31 62 65 74 61  34 0a 63 61 43 65 72 74  |o/v1beta4.caCert|
	00000180  69 66 69 63 61 74 65 56  61 6c 69 64 69 74 79 50  |ificateValidityP|
	00000190  65 72 69 6f 64 3a 20 38  37 36 30 30 68 30 6d 30  |eriod: 87600h0m0|
	000001a0  73 0a 63 65 72 74 69 66  69 63 61 74 65 56 61 6c  |s.certificateVal|
	000001b0  69 64 69 74 79 50 65 72  69 6f 64 3a 20 38 37 36  |idityPeriod: 876|
	000001c0  30 68 30 6d 30 73 0a 63  65 72 74 69 66 69 63 61  |0h0m0s.certifica|
	000001d0  74 65 73 44 69 72 3a 20  2f 65 74 63 2f 6b 75 62  |tesDir: /etc/kub|
	000001e0  65 72 6e 65 74 65 73 2f  70 6b 69 0a 63 6c 75 73  |ernetes/pki.clus|
	000001f0  74 65 72 4e 61 6d 65 3a  20 6b 69 6e 64 0a 63 6f  |terName: kind.co|
	00000200  6e 74 72 6f 6c 50 6c 61  6e 65 45 6e 64 70 6f 69  |ntrolPlaneEndpoi|
	00000210  6e 74 3a 20 6b 69 6e 64  2d 63 6f 6e 74 72 6f 6c  |nt: kind-control|
	00000220  2d 70 6c 61 6e 65 3a 36  34 34 33 0a 63 6f 6e 74  |-plane:6443.cont|
	00000230  72 6f 6c 6c 65 72 4d 61  6e 61 67 65 72 3a 0a 20  |rollerManager:. |
	00000240  20 65 78 74 72 61 41 72  67 73 3a 0a 20 20 2d 20  | extraArgs:.  - |
	00000250  6e 61 6d 65 3a 20 65 6e  61 62 6c 65 2d 68 6f 73  |name: enable-hos|
	00000260  74 70 61 74 68 2d 70 72  6f 76 69 73 69 6f 6e 65  |tpath-provisione|
	00000270  72 0a 20 20 20 20 76 61  6c 75 65 3a 20 22 74 72  |r.    value: "tr|
	00000280  75 65 22 0a 64 6e 73 3a  20 7b 7d 0a 65 6e 63 72  |ue".dns: {}.encr|
	00000290  79 70 74 69 6f 6e 41 6c  67 6f 72 69 74 68 6d 3a  |yptionAlgorithm:|
	000002a0  20 52 53 41 2d 32 30 34  38 0a 65 74 63 64 3a 0a  | RSA-2048.etcd:.|
	000002b0  20 20 6c 6f 63 61 6c 3a  0a 20 20 20 20 64 61 74  |  local:.    dat|
	000002c0  61 44 69 72 3a 20 2f 76  61 72 2f 6c 69 62 2f 65  |aDir: /var/lib/e|
	000002d0  74 63 64 0a 69 6d 61 67  65 52 65 70 6f 73 69 74  |tcd.imageReposit|
	000002e0  6f 72 79 3a 20 72 65 67  69 73 74 72 79 2e 6b 38  |ory: registry.k8|
	000002f0  73 2e 69 6f 0a 6b 69 6e  64 3a 20 43 6c 75 73 74  |s.io.kind: Clust|
	00000300  65 72 43 6f 6e 66 69 67  75 72 61 74 69 6f 6e 0a  |erConfiguration.|
	00000310  6b 75 62 65 72 6e 65 74  65 73 56 65 72 73 69 6f  |kubernetesVersio|
	00000320  6e 3a 20 76 31 2e 33 34  2e 30 0a 6e 65 74 77 6f  |n: v1.34.0.netwo|
	00000330  72 6b 69 6e 67 3a 0a 20  20 64 6e 73 44 6f 6d 61  |rking:.  dnsDoma|
	00000340  69 6e 3a 20 63 6c 75 73  74 65 72 2e 6c 6f 63 61  |in: cluster.loca|
	00000350  6c 0a 20 20 70 6f 64 53  75 62 6e 65 74 3a 20 31  |l.  podSubnet: 1|
	00000360  30 2e 32 34 34 2e 30 2e  30 2f 31 36 0a 20 20 73  |0.244.0.0/16.  s|
	00000370  65 72 76 69 63 65 53 75  62 6e 65 74 3a 20 31 30  |erviceSubnet: 10|
	00000380  2e 39 36 2e 30 2e 30 2f  31 36 0a 70 72 6f 78 79  |.96.0.0/16.proxy|
	00000390  3a 20 7b 7d 0a 73 63 68  65 64 75 6c 65 72 3a 20  |: {}.scheduler: |
	000003a0  7b 7d 0a 1a 00 22 00                              |{}...".|
 >
I1215 14:50:15.313552     230 kubeproxy.go:55] attempting to download the KubeProxyConfiguration from ConfigMap "kube-proxy"
I1215 14:50:15.313605     230 type.go:165] "Request Body" body=""
I1215 14:50:15.313716     230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Authorization: Bearer <masked>" -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" 'https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s'
 >
I1215 14:50:15.315629     230 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-proxy?timeout=10s" status="200 OK" headers=<
	Audit-Id: 7da1f078-b1a7-40b7-8b9d-595fb32485b5
	Cache-Control: no-cache, private
	Content-Length: 2089
	Content-Type: application/vnd.kubernetes.protobuf
	Date: Mon, 15 Dec 2025 14:50:15 GMT
	X-Kubernetes-Pf-Flowschema-Uid: f54c5d84-c2d8-4cbb-8801-07adb48ce73d
	X-Kubernetes-Pf-Prioritylevel-Uid: 7df0c0a7-2f8a-40ca-9625-0b6635feccd2
 > milliseconds=1 getConnectionMilliseconds=0 serverProcessingMilliseconds=1
I1215 14:50:15.315778     230 type.go:165] "Response Body" body=<
	00000000  6b 38 73 00 0a 0f 0a 02  76 31 12 09 43 6f 6e 66  |k8s.....v1..Conf|
	00000010  69 67 4d 61 70 12 8d 10  0a 85 02 0a 0a 6b 75 62  |igMap........kub|
	00000020  65 2d 70 72 6f 78 79 12  00 1a 0b 6b 75 62 65 2d  |e-proxy....kube-|
	00000030  73 79 73 74 65 6d 22 00  2a 24 34 31 37 66 31 65  |system".*$417f1e|
	00000040  38 32 2d 31 33 33 63 2d  34 65 38 32 2d 38 62 66  |82-133c-4e82-8bf|
	00000050  36 2d 38 66 34 36 34 31  33 31 33 61 66 37 32 03  |6-8f4641313af72.|
	00000060  32 33 38 38 00 42 08 08  d8 da ff c9 06 10 00 5a  |2388.B.........Z|
	00000070  11 0a 03 61 70 70 12 0a  6b 75 62 65 2d 70 72 6f  |...app..kube-pro|
	00000080  78 79 8a 01 9a 01 0a 07  6b 75 62 65 61 64 6d 12  |xy......kubeadm.|
	00000090  06 55 70 64 61 74 65 1a  02 76 31 22 08 08 d8 da  |.Update..v1"....|
	000000a0  ff c9 06 10 00 32 08 46  69 65 6c 64 73 56 31 3a  |.....2.FieldsV1:|
	000000b0  6d 0a 6b 7b 22 66 3a 64  61 74 61 22 3a 7b 22 2e  |m.k{"f:data":{".|
	000000c0  22 3a 7b 7d 2c 22 66 3a  63 6f 6e 66 69 67 2e 63  |":{},"f:config.c|
	000000d0  6f 6e 66 22 3a 7b 7d 2c  22 66 3a 6b 75 62 65 63  |onf":{},"f:kubec|
	000000e0  6f 6e 66 69 67 2e 63 6f  6e 66 22 3a 7b 7d 7d 2c  |onfig.conf":{}},|
	000000f0  22 66 3a 6d 65 74 61 64  61 74 61 22 3a 7b 22 66  |"f:metadata":{"f|
	00000100  3a 6c 61 62 65 6c 73 22  3a 7b 22 2e 22 3a 7b 7d  |:labels":{".":{}|
	00000110  2c 22 66 3a 61 70 70 22  3a 7b 7d 7d 7d 7d 42 00  |,"f:app":{}}}}B.|
	00000120  12 d1 0a 0a 0b 63 6f 6e  66 69 67 2e 63 6f 6e 66  |.....config.conf|
	00000130  12 c1 0a 61 70 69 56 65  72 73 69 6f 6e 3a 20 6b  |...apiVersion: k|
	00000140  75 62 65 70 72 6f 78 79  2e 63 6f 6e 66 69 67 2e  |ubeproxy.config.|
	00000150  6b 38 73 2e 69 6f 2f 76  31 61 6c 70 68 61 31 0a  |k8s.io/v1alpha1.|
	00000160  62 69 6e 64 41 64 64 72  65 73 73 3a 20 30 2e 30  |bindAddress: 0.0|
	00000170  2e 30 2e 30 0a 62 69 6e  64 41 64 64 72 65 73 73  |.0.0.bindAddress|
	00000180  48 61 72 64 46 61 69 6c  3a 20 66 61 6c 73 65 0a  |HardFail: false.|
	00000190  63 6c 69 65 6e 74 43 6f  6e 6e 65 63 74 69 6f 6e  |clientConnection|
	000001a0  3a 0a 20 20 61 63 63 65  70 74 43 6f 6e 74 65 6e  |:.  acceptConten|
	000001b0  74 54 79 70 65 73 3a 20  22 22 0a 20 20 62 75 72  |tTypes: "".  bur|
	000001c0  73 74 3a 20 30 0a 20 20  63 6f 6e 74 65 6e 74 54  |st: 0.  contentT|
	000001d0  79 70 65 3a 20 22 22 0a  20 20 6b 75 62 65 63 6f  |ype: "".  kubeco|
	000001e0  6e 66 69 67 3a 20 2f 76  61 72 2f 6c 69 62 2f 6b  |nfig: /var/lib/k|
	000001f0  75 62 65 2d 70 72 6f 78  79 2f 6b 75 62 65 63 6f  |ube-proxy/kubeco|
	00000200  6e 66 69 67 2e 63 6f 6e  66 0a 20 20 71 70 73 3a  |nfig.conf.  qps:|
	00000210  20 30 0a 63 6c 75 73 74  65 72 43 49 44 52 3a 20  | 0.clusterCIDR: |
	00000220  31 30 2e 32 34 34 2e 30  2e 30 2f 31 36 0a 63 6f  |10.244.0.0/16.co|
	00000230  6e 66 69 67 53 79 6e 63  50 65 72 69 6f 64 3a 20  |nfigSyncPeriod: |
	00000240  30 73 0a 63 6f 6e 6e 74  72 61 63 6b 3a 0a 20 20  |0s.conntrack:.  |
	00000250  6d 61 78 50 65 72 43 6f  72 65 3a 20 30 0a 20 20  |maxPerCore: 0.  |
	00000260  6d 69 6e 3a 20 6e 75 6c  6c 0a 20 20 74 63 70 42  |min: null.  tcpB|
	00000270  65 4c 69 62 65 72 61 6c  3a 20 66 61 6c 73 65 0a  |eLiberal: false.|
	00000280  20 20 74 63 70 43 6c 6f  73 65 57 61 69 74 54 69  |  tcpCloseWaitTi|
	00000290  6d 65 6f 75 74 3a 20 6e  75 6c 6c 0a 20 20 74 63  |meout: null.  tc|
	000002a0  70 45 73 74 61 62 6c 69  73 68 65 64 54 69 6d 65  |pEstablishedTime|
	000002b0  6f 75 74 3a 20 6e 75 6c  6c 0a 20 20 75 64 70 53  |out: null.  udpS|
	000002c0  74 72 65 61 6d 54 69 6d  65 6f 75 74 3a 20 30 73  |treamTimeout: 0s|
	000002d0  0a 20 20 75 64 70 54 69  6d 65 6f 75 74 3a 20 30  |.  udpTimeout: 0|
	000002e0  73 0a 64 65 74 65 63 74  4c 6f 63 61 6c 3a 0a 20  |s.detectLocal:. |
	000002f0  20 62 72 69 64 67 65 49  6e 74 65 72 66 61 63 65  | bridgeInterface|
	00000300  3a 20 22 22 0a 20 20 69  6e 74 65 72 66 61 63 65  |: "".  interface|
	00000310  4e 61 6d 65 50 72 65 66  69 78 3a 20 22 22 0a 64  |NamePrefix: "".d|
	00000320  65 74 65 63 74 4c 6f 63  61 6c 4d 6f 64 65 3a 20  |etectLocalMode: |
	00000330  22 22 0a 65 6e 61 62 6c  65 50 72 6f 66 69 6c 69  |"".enableProfili|
	00000340  6e 67 3a 20 66 61 6c 73  65 0a 68 65 61 6c 74 68  |ng: false.health|
	00000350  7a 42 69 6e 64 41 64 64  72 65 73 73 3a 20 22 22  |zBindAddress: ""|
	00000360  0a 68 6f 73 74 6e 61 6d  65 4f 76 65 72 72 69 64  |.hostnameOverrid|
	00000370  65 3a 20 22 22 0a 69 70  74 61 62 6c 65 73 3a 0a  |e: "".iptables:.|
	00000380  20 20 6c 6f 63 61 6c 68  6f 73 74 4e 6f 64 65 50  |  localhostNodeP|
	00000390  6f 72 74 73 3a 20 6e 75  6c 6c 0a 20 20 6d 61 73  |orts: null.  mas|
	000003a0  71 75 65 72 61 64 65 41  6c 6c 3a 20 66 61 6c 73  |queradeAll: fals|
	000003b0  65 0a 20 20 6d 61 73 71  75 65 72 61 64 65 42 69  |e.  masqueradeBi|
	000003c0  74 3a 20 6e 75 6c 6c 0a  20 20 6d 69 6e 53 79 6e  |t: null.  minSyn|
	000003d0  63 50 65 72 69 6f 64 3a  20 31 73 0a 20 20 73 79  |cPeriod: 1s.  sy|
	000003e0  6e 63 50 65 72 69 6f 64  3a 20 30 73 0a 69 70 76  |ncPeriod: 0s.ipv|
	000003f0  73 3a 0a 20 20 65 78 63  6c 75 64 65 43 49 44 52  |s:.  excludeCIDR|
	00000400  73 3a 20 6e 75 6c 6c 0a  20 20 6d 69 6e 53 79 6e  |s: null.  minSyn|
	00000410  63 50 65 72 69 6f 64 3a  20 30 73 0a 20 20 73 63  |cPeriod: 0s.  sc|
	00000420  68 65 64 75 6c 65 72 3a  20 22 22 0a 20 20 73 74  |heduler: "".  st|
	00000430  72 69 63 74 41 52 50 3a  20 66 61 6c 73 65 0a 20  |rictARP: false. |
	00000440  20 73 79 6e 63 50 65 72  69 6f 64 3a 20 30 73 0a  | syncPeriod: 0s.|
	00000450  20 20 74 63 70 46 69 6e  54 69 6d 65 6f 75 74 3a  |  tcpFinTimeout:|
	00000460  20 30 73 0a 20 20 74 63  70 54 69 6d 65 6f 75 74  | 0s.  tcpTimeout|
	00000470  3a 20 30 73 0a 20 20 75  64 70 54 69 6d 65 6f 75  |: 0s.  udpTimeou|
	00000480  74 3a 20 30 73 0a 6b 69  6e 64 3a 20 4b 75 62 65  |t: 0s.kind: Kube|
	00000490  50 72 6f 78 79 43 6f 6e  66 69 67 75 72 61 74 69  |ProxyConfigurati|
	000004a0  6f 6e 0a 6c 6f 67 67 69  6e 67 3a 0a 20 20 66 6c  |on.logging:.  fl|
	000004b0  75 73 68 46 72 65 71 75  65 6e 63 79 3a 20 30 0a  |ushFrequency: 0.|
	000004c0  20 20 6f 70 74 69 6f 6e  73 3a 0a 20 20 20 20 6a  |  options:.    j|
	000004d0  73 6f 6e 3a 0a 20 20 20  20 20 20 69 6e 66 6f 42  |son:.      infoB|
	000004e0  75 66 66 65 72 53 69 7a  65 3a 20 22 30 22 0a 20  |ufferSize: "0". |
	000004f0  20 20 20 74 65 78 74 3a  0a 20 20 20 20 20 20 69  |   text:.      i|
	00000500  6e 66 6f 42 75 66 66 65  72 53 69 7a 65 3a 20 22  |nfoBufferSize: "|
	00000510  30 22 0a 20 20 76 65 72  62 6f 73 69 74 79 3a 20  |0".  verbosity: |
	00000520  30 0a 6d 65 74 72 69 63  73 42 69 6e 64 41 64 64  |0.metricsBindAdd|
	00000530  72 65 73 73 3a 20 22 22  0a 6d 6f 64 65 3a 20 69  |ress: "".mode: i|
	00000540  70 74 61 62 6c 65 73 0a  6e 66 74 61 62 6c 65 73  |ptables.nftables|
	00000550  3a 0a 20 20 6d 61 73 71  75 65 72 61 64 65 41 6c  |:.  masqueradeAl|
	00000560  6c 3a 20 66 61 6c 73 65  0a 20 20 6d 61 73 71 75  |l: false.  masqu|
	00000570  65 72 61 64 65 42 69 74  3a 20 6e 75 6c 6c 0a 20  |eradeBit: null. |
	00000580  20 6d 69 6e 53 79 6e 63  50 65 72 69 6f 64 3a 20  | minSyncPeriod: |
	00000590  30 73 0a 20 20 73 79 6e  63 50 65 72 69 6f 64 3a  |0s.  syncPeriod:|
	000005a0  20 30 73 0a 6e 6f 64 65  50 6f 72 74 41 64 64 72  | 0s.nodePortAddr|
	000005b0  65 73 73 65 73 3a 20 6e  75 6c 6c 0a 6f 6f 6d 53  |esses: null.oomS|
	000005c0  63 6f 72 65 41 64 6a 3a  20 6e 75 6c 6c 0a 70 6f  |coreAdj: null.po|
	000005d0  72 74 52 61 6e 67 65 3a  20 22 22 0a 73 68 6f 77  |rtRange: "".show|
	000005e0  48 69 64 64 65 6e 4d 65  74 72 69 63 73 46 6f 72  |HiddenMetricsFor|
	000005f0  56 65 72 73 69 6f 6e 3a  20 22 22 0a 77 69 6e 6b  |Version: "".wink|
	00000600  65 72 6e 65 6c 3a 0a 20  20 65 6e 61 62 6c 65 44  |ernel:.  enableD|
	00000610  53 52 3a 20 66 61 6c 73  65 0a 20 20 66 6f 72 77  |SR: false.  forw|
	00000620  61 72 64 48 65 61 6c 74  68 43 68 65 63 6b 56 69  |ardHealthCheckVi|
	00000630  70 3a 20 66 61 6c 73 65  0a 20 20 6e 65 74 77 6f  |p: false.  netwo|
	00000640  72 6b 4e 61 6d 65 3a 20  22 22 0a 20 20 72 6f 6f  |rkName: "".  roo|
	00000650  74 48 6e 73 45 6e 64 70  6f 69 6e 74 4e 61 6d 65  |tHnsEndpointName|
	00000660  3a 20 22 22 0a 20 20 73  6f 75 72 63 65 56 69 70  |: "".  sourceVip|
	00000670  3a 20 22 22 12 ae 03 0a  0f 6b 75 62 65 63 6f 6e  |: "".....kubecon|
	00000680  66 69 67 2e 63 6f 6e 66  12 9a 03 61 70 69 56 65  |fig.conf...apiVe|
	00000690  72 73 69 6f 6e 3a 20 76  31 0a 6b 69 6e 64 3a 20  |rsion: v1.kind: |
	000006a0  43 6f 6e 66 69 67 0a 63  6c 75 73 74 65 72 73 3a  |Config.clusters:|
	000006b0  0a 2d 20 63 6c 75 73 74  65 72 3a 0a 20 20 20 20  |.- cluster:.    |
	000006c0  63 65 72 74 69 66 69 63  61 74 65 2d 61 75 74 68  |certificate-auth|
	000006d0  6f 72 69 74 79 3a 20 2f  76 61 72 2f 72 75 6e 2f  |ority: /var/run/|
	000006e0  73 65 63 72 65 74 73 2f  6b 75 62 65 72 6e 65 74  |secrets/kubernet|
	000006f0  65 73 2e 69 6f 2f 73 65  72 76 69 63 65 61 63 63  |es.io/serviceacc|
	00000700  6f 75 6e 74 2f 63 61 2e  63 72 74 0a 20 20 20 20  |ount/ca.crt.    |
	00000710  73 65 72 76 65 72 3a 20  68 74 74 70 73 3a 2f 2f  |server: https://|
	00000720  6b 69 6e 64 2d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |kind-control-pla|
	00000730  6e 65 3a 36 34 34 33 0a  20 20 6e 61 6d 65 3a 20  |ne:6443.  name: |
	00000740  64 65 66 61 75 6c 74 0a  63 6f 6e 74 65 78 74 73  |default.contexts|
	00000750  3a 0a 2d 20 63 6f 6e 74  65 78 74 3a 0a 20 20 20  |:.- context:.   |
	00000760  20 63 6c 75 73 74 65 72  3a 20 64 65 66 61 75 6c  | cluster: defaul|
	00000770  74 0a 20 20 20 20 6e 61  6d 65 73 70 61 63 65 3a  |t.    namespace:|
	00000780  20 64 65 66 61 75 6c 74  0a 20 20 20 20 75 73 65  | default.    use|
	00000790  72 3a 20 64 65 66 61 75  6c 74 0a 20 20 6e 61 6d  |r: default.  nam|
	000007a0  65 3a 20 64 65 66 61 75  6c 74 0a 63 75 72 72 65  |e: default.curre|
	000007b0  6e 74 2d 63 6f 6e 74 65  78 74 3a 20 64 65 66 61  |nt-context: defa|
	000007c0  75 6c 74 0a 75 73 65 72  73 3a 0a 2d 20 6e 61 6d  |ult.users:.- nam|
	000007d0  65 3a 20 64 65 66 61 75  6c 74 0a 20 20 75 73 65  |e: default.  use|
	000007e0  72 3a 0a 20 20 20 20 74  6f 6b 65 6e 46 69 6c 65  |r:.    tokenFile|
	000007f0  3a 20 2f 76 61 72 2f 72  75 6e 2f 73 65 63 72 65  |: /var/run/secre|
	00000800  74 73 2f 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ts/kubernetes.io|
	00000810  2f 73 65 72 76 69 63 65  61 63 63 6f 75 [truncated 102 chars]
 >
I1215 14:50:15.319974     230 kubelet.go:73] attempting to download the KubeletConfiguration from ConfigMap "kubelet-config"
I1215 14:50:15.320052     230 type.go:165] "Request Body" body=""
I1215 14:50:15.320143     230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" -H "Authorization: Bearer <masked>" 'https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s'
 >
I1215 14:50:15.322275     230 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kubelet-config?timeout=10s" status="200 OK" headers=<
	Audit-Id: edacaa09-2a92-49f8-992d-467253d02bac
	Cache-Control: no-cache, private
	Content-Length: 1453
	Content-Type: application/vnd.kubernetes.protobuf
	Date: Mon, 15 Dec 2025 14:50:15 GMT
	X-Kubernetes-Pf-Flowschema-Uid: f54c5d84-c2d8-4cbb-8801-07adb48ce73d
	X-Kubernetes-Pf-Prioritylevel-Uid: 7df0c0a7-2f8a-40ca-9625-0b6635feccd2
 > milliseconds=2 getConnectionMilliseconds=0 serverProcessingMilliseconds=1
I1215 14:50:15.322454     230 type.go:165] "Response Body" body=<
	00000000  6b 38 73 00 0a 0f 0a 02  76 31 12 09 43 6f 6e 66  |k8s.....v1..Conf|
	00000010  69 67 4d 61 70 12 91 0b  0a ac 01 0a 0e 6b 75 62  |igMap........kub|
	00000020  65 6c 65 74 2d 63 6f 6e  66 69 67 12 00 1a 0b 6b  |elet-config....k|
	00000030  75 62 65 2d 73 79 73 74  65 6d 22 00 2a 24 30 32  |ube-system".*$02|
	00000040  36 61 32 32 32 63 2d 39  66 65 34 2d 34 35 39 37  |6a222c-9fe4-4597|
	00000050  2d 39 30 38 61 2d 62 35  65 35 64 62 33 35 32 35  |-908a-b5e5db3525|
	00000060  36 64 32 03 32 31 30 38  00 42 08 08 d7 da ff c9  |6d2.2108.B......|
	00000070  06 10 00 8a 01 51 0a 07  6b 75 62 65 61 64 6d 12  |.....Q..kubeadm.|
	00000080  06 55 70 64 61 74 65 1a  02 76 31 22 08 08 d7 da  |.Update..v1"....|
	00000090  ff c9 06 10 00 32 08 46  69 65 6c 64 73 56 31 3a  |.....2.FieldsV1:|
	000000a0  24 0a 22 7b 22 66 3a 64  61 74 61 22 3a 7b 22 2e  |$."{"f:data":{".|
	000000b0  22 3a 7b 7d 2c 22 66 3a  6b 75 62 65 6c 65 74 22  |":{},"f:kubelet"|
	000000c0  3a 7b 7d 7d 7d 42 00 12  df 09 0a 07 6b 75 62 65  |:{}}}B......kube|
	000000d0  6c 65 74 12 d3 09 61 70  69 56 65 72 73 69 6f 6e  |let...apiVersion|
	000000e0  3a 20 6b 75 62 65 6c 65  74 2e 63 6f 6e 66 69 67  |: kubelet.config|
	000000f0  2e 6b 38 73 2e 69 6f 2f  76 31 62 65 74 61 31 0a  |.k8s.io/v1beta1.|
	00000100  61 75 74 68 65 6e 74 69  63 61 74 69 6f 6e 3a 0a  |authentication:.|
	00000110  20 20 61 6e 6f 6e 79 6d  6f 75 73 3a 0a 20 20 20  |  anonymous:.   |
	00000120  20 65 6e 61 62 6c 65 64  3a 20 66 61 6c 73 65 0a  | enabled: false.|
	00000130  20 20 77 65 62 68 6f 6f  6b 3a 0a 20 20 20 20 63  |  webhook:.    c|
	00000140  61 63 68 65 54 54 4c 3a  20 30 73 0a 20 20 20 20  |acheTTL: 0s.    |
	00000150  65 6e 61 62 6c 65 64 3a  20 74 72 75 65 0a 20 20  |enabled: true.  |
	00000160  78 35 30 39 3a 0a 20 20  20 20 63 6c 69 65 6e 74  |x509:.    client|
	00000170  43 41 46 69 6c 65 3a 20  2f 65 74 63 2f 6b 75 62  |CAFile: /etc/kub|
	00000180  65 72 6e 65 74 65 73 2f  70 6b 69 2f 63 61 2e 63  |ernetes/pki/ca.c|
	00000190  72 74 0a 61 75 74 68 6f  72 69 7a 61 74 69 6f 6e  |rt.authorization|
	000001a0  3a 0a 20 20 6d 6f 64 65  3a 20 57 65 62 68 6f 6f  |:.  mode: Webhoo|
	000001b0  6b 0a 20 20 77 65 62 68  6f 6f 6b 3a 0a 20 20 20  |k.  webhook:.   |
	000001c0  20 63 61 63 68 65 41 75  74 68 6f 72 69 7a 65 64  | cacheAuthorized|
	000001d0  54 54 4c 3a 20 30 73 0a  20 20 20 20 63 61 63 68  |TTL: 0s.    cach|
	000001e0  65 55 6e 61 75 74 68 6f  72 69 7a 65 64 54 54 4c  |eUnauthorizedTTL|
	000001f0  3a 20 30 73 0a 63 67 72  6f 75 70 44 72 69 76 65  |: 0s.cgroupDrive|
	00000200  72 3a 20 73 79 73 74 65  6d 64 0a 63 67 72 6f 75  |r: systemd.cgrou|
	00000210  70 52 6f 6f 74 3a 20 2f  6b 75 62 65 6c 65 74 0a  |pRoot: /kubelet.|
	00000220  63 6c 75 73 74 65 72 44  4e 53 3a 0a 2d 20 31 30  |clusterDNS:.- 10|
	00000230  2e 39 36 2e 30 2e 31 30  0a 63 6c 75 73 74 65 72  |.96.0.10.cluster|
	00000240  44 6f 6d 61 69 6e 3a 20  63 6c 75 73 74 65 72 2e  |Domain: cluster.|
	00000250  6c 6f 63 61 6c 0a 63 6f  6e 74 61 69 6e 65 72 52  |local.containerR|
	00000260  75 6e 74 69 6d 65 45 6e  64 70 6f 69 6e 74 3a 20  |untimeEndpoint: |
	00000270  22 22 0a 63 70 75 4d 61  6e 61 67 65 72 52 65 63  |"".cpuManagerRec|
	00000280  6f 6e 63 69 6c 65 50 65  72 69 6f 64 3a 20 30 73  |oncilePeriod: 0s|
	00000290  0a 63 72 61 73 68 4c 6f  6f 70 42 61 63 6b 4f 66  |.crashLoopBackOf|
	000002a0  66 3a 20 7b 7d 0a 65 76  69 63 74 69 6f 6e 48 61  |f: {}.evictionHa|
	000002b0  72 64 3a 0a 20 20 69 6d  61 67 65 66 73 2e 61 76  |rd:.  imagefs.av|
	000002c0  61 69 6c 61 62 6c 65 3a  20 30 25 0a 20 20 6e 6f  |ailable: 0%.  no|
	000002d0  64 65 66 73 2e 61 76 61  69 6c 61 62 6c 65 3a 20  |defs.available: |
	000002e0  30 25 0a 20 20 6e 6f 64  65 66 73 2e 69 6e 6f 64  |0%.  nodefs.inod|
	000002f0  65 73 46 72 65 65 3a 20  30 25 0a 65 76 69 63 74  |esFree: 0%.evict|
	00000300  69 6f 6e 50 72 65 73 73  75 72 65 54 72 61 6e 73  |ionPressureTrans|
	00000310  69 74 69 6f 6e 50 65 72  69 6f 64 3a 20 30 73 0a  |itionPeriod: 0s.|
	00000320  66 61 69 6c 53 77 61 70  4f 6e 3a 20 66 61 6c 73  |failSwapOn: fals|
	00000330  65 0a 66 69 6c 65 43 68  65 63 6b 46 72 65 71 75  |e.fileCheckFrequ|
	00000340  65 6e 63 79 3a 20 30 73  0a 68 65 61 6c 74 68 7a  |ency: 0s.healthz|
	00000350  42 69 6e 64 41 64 64 72  65 73 73 3a 20 31 32 37  |BindAddress: 127|
	00000360  2e 30 2e 30 2e 31 0a 68  65 61 6c 74 68 7a 50 6f  |.0.0.1.healthzPo|
	00000370  72 74 3a 20 31 30 32 34  38 0a 68 74 74 70 43 68  |rt: 10248.httpCh|
	00000380  65 63 6b 46 72 65 71 75  65 6e 63 79 3a 20 30 73  |eckFrequency: 0s|
	00000390  0a 69 6d 61 67 65 47 43  48 69 67 68 54 68 72 65  |.imageGCHighThre|
	000003a0  73 68 6f 6c 64 50 65 72  63 65 6e 74 3a 20 31 30  |sholdPercent: 10|
	000003b0  30 0a 69 6d 61 67 65 4d  61 78 69 6d 75 6d 47 43  |0.imageMaximumGC|
	000003c0  41 67 65 3a 20 30 73 0a  69 6d 61 67 65 4d 69 6e  |Age: 0s.imageMin|
	000003d0  69 6d 75 6d 47 43 41 67  65 3a 20 30 73 0a 6b 69  |imumGCAge: 0s.ki|
	000003e0  6e 64 3a 20 4b 75 62 65  6c 65 74 43 6f 6e 66 69  |nd: KubeletConfi|
	000003f0  67 75 72 61 74 69 6f 6e  0a 6c 6f 67 67 69 6e 67  |guration.logging|
	00000400  3a 0a 20 20 66 6c 75 73  68 46 72 65 71 75 65 6e  |:.  flushFrequen|
	00000410  63 79 3a 20 30 0a 20 20  6f 70 74 69 6f 6e 73 3a  |cy: 0.  options:|
	00000420  0a 20 20 20 20 6a 73 6f  6e 3a 0a 20 20 20 20 20  |.    json:.     |
	00000430  20 69 6e 66 6f 42 75 66  66 65 72 53 69 7a 65 3a  | infoBufferSize:|
	00000440  20 22 30 22 0a 20 20 20  20 74 65 78 74 3a 0a 20  | "0".    text:. |
	00000450  20 20 20 20 20 69 6e 66  6f 42 75 66 66 65 72 53  |     infoBufferS|
	00000460  69 7a 65 3a 20 22 30 22  0a 20 20 76 65 72 62 6f  |ize: "0".  verbo|
	00000470  73 69 74 79 3a 20 30 0a  6d 65 6d 6f 72 79 53 77  |sity: 0.memorySw|
	00000480  61 70 3a 20 7b 7d 0a 6e  6f 64 65 53 74 61 74 75  |ap: {}.nodeStatu|
	00000490  73 52 65 70 6f 72 74 46  72 65 71 75 65 6e 63 79  |sReportFrequency|
	000004a0  3a 20 30 73 0a 6e 6f 64  65 53 74 61 74 75 73 55  |: 0s.nodeStatusU|
	000004b0  70 64 61 74 65 46 72 65  71 75 65 6e 63 79 3a 20  |pdateFrequency: |
	000004c0  30 73 0a 72 6f 74 61 74  65 43 65 72 74 69 66 69  |0s.rotateCertifi|
	000004d0  63 61 74 65 73 3a 20 74  72 75 65 0a 72 75 6e 74  |cates: true.runt|
	000004e0  69 6d 65 52 65 71 75 65  73 74 54 69 6d 65 6f 75  |imeRequestTimeou|
	000004f0  74 3a 20 30 73 0a 73 68  75 74 64 6f 77 6e 47 72  |t: 0s.shutdownGr|
	00000500  61 63 65 50 65 72 69 6f  64 3a 20 30 73 0a 73 68  |acePeriod: 0s.sh|
	00000510  75 74 64 6f 77 6e 47 72  61 63 65 50 65 72 69 6f  |utdownGracePerio|
	00000520  64 43 72 69 74 69 63 61  6c 50 6f 64 73 3a 20 30  |dCriticalPods: 0|
	00000530  73 0a 73 74 61 74 69 63  50 6f 64 50 61 74 68 3a  |s.staticPodPath:|
	00000540  20 2f 65 74 63 2f 6b 75  62 65 72 6e 65 74 65 73  | /etc/kubernetes|
	00000550  2f 6d 61 6e 69 66 65 73  74 73 0a 73 74 72 65 61  |/manifests.strea|
	00000560  6d 69 6e 67 43 6f 6e 6e  65 63 74 69 6f 6e 49 64  |mingConnectionId|
	00000570  6c 65 54 69 6d 65 6f 75  74 3a 20 30 73 0a 73 79  |leTimeout: 0s.sy|
	00000580  6e 63 46 72 65 71 75 65  6e 63 79 3a 20 30 73 0a  |ncFrequency: 0s.|
	00000590  76 6f 6c 75 6d 65 53 74  61 74 73 41 67 67 50 65  |volumeStatsAggPe|
	000005a0  72 69 6f 64 3a 20 30 73  0a 1a 00 22 00           |riod: 0s...".|
 >
I1215 14:50:15.324583     230 initconfiguration.go:114] skip CRI socket detection, fill with the default CRI socket unix:///var/run/containerd/containerd.sock
I1215 14:50:15.324815     230 interface.go:432] Looking for default routes with IPv4 addresses
I1215 14:50:15.324830     230 interface.go:437] Default route transits interface "eth0"
I1215 14:50:15.324975     230 interface.go:209] Interface eth0 is up
I1215 14:50:15.325021     230 interface.go:257] Interface "eth0" has 3 addresses :[172.18.0.3/16 fc00:****::3/64 fe80::****/64].
I1215 14:50:15.325041     230 interface.go:224] Checking addr  172.18.0.3/16.
I1215 14:50:15.325057     230 interface.go:231] IP found 172.18.0.3
I1215 14:50:15.325072     230 interface.go:263] Found valid IPv4 address 172.18.0.3 for interface "eth0".
I1215 14:50:15.325085     230 interface.go:443] Found active IP 172.18.0.3 
I1215 14:50:15.325133     230 common.go:148] WARNING: tolerating control plane version v1.34.0 as a pre-release version
I1215 14:50:15.325168     230 preflight.go:108] [preflight] Running configuration dependant checks
I1215 14:50:15.325179     230 controlplaneprepare.go:225] [download-certs] Skipping certs download
I1215 14:50:15.325485     230 kubelet.go:147] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1215 14:50:15.327017     230 kubelet.go:162] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1215 14:50:15.327125     230 kubelet.go:178] [kubelet-start] Checking for an existing Node in the cluster with name "5a5adbbfec7a" and status "Ready"
I1215 14:50:15.327178     230 type.go:165] "Request Body" body=""
I1215 14:50:15.327263     230 round_trippers.go:527] "Request" curlCommand=<
	curl -v -XGET  -H "Accept: application/vnd.kubernetes.protobuf,application/json" -H "User-Agent: kubeadm/v1.35.0 (linux/amd64) kubernetes/f35f950" -H "Authorization: Bearer <masked>" 'https://kind-control-plane:6443/api/v1/nodes/5a5adbbfec7a?timeout=10s'
 >
I1215 14:50:15.329075     230 round_trippers.go:632] "Response" verb="GET" url="https://kind-control-plane:6443/api/v1/nodes/5a5adbbfec7a?timeout=10s" status="404 Not Found" headers=<
	Audit-Id: b5dddad2-5b97-4e03-9241-52b266d62df5
	Cache-Control: no-cache, private
	Content-Length: 115
	Content-Type: application/vnd.kubernetes.protobuf
	Date: Mon, 15 Dec 2025 14:50:15 GMT
	X-Kubernetes-Pf-Flowschema-Uid: f54c5d84-c2d8-4cbb-8801-07adb48ce73d
	X-Kubernetes-Pf-Prioritylevel-Uid: 7df0c0a7-2f8a-40ca-9625-0b6635feccd2
 > milliseconds=1 getConnectionMilliseconds=0 serverProcessingMilliseconds=1
I1215 14:50:15.329149     230 type.go:165] "Response Body" body=<
	00000000  6b 38 73 00 0a 0c 0a 02  76 31 12 06 53 74 61 74  |k8s.....v1..Stat|
	00000010  75 73 12 5b 0a 06 0a 00  12 00 1a 00 12 07 46 61  |us.[..........Fa|
	00000020  69 6c 75 72 65 1a 1e 6e  6f 64 65 73 20 22 35 61  |ilure..nodes "5a|
	00000030  35 61 64 62 62 66 65 63  37 61 22 20 6e 6f 74 20  |5adbbfec7a" not |
	00000040  66 6f 75 6e 64 22 08 4e  6f 74 46 6f 75 6e 64 2a  |found".NotFound*|
	00000050  1b 0a 0c 35 61 35 61 64  62 62 66 65 63 37 61 12  |...5a5adbbfec7a.|
	00000060  00 1a 05 6e 6f 64 65 73  28 00 32 00 30 94 03 1a  |...nodes(.2.0...|
	00000070  00 22 00                                          |.".|
 >
I1215 14:50:15.329251     230 kubelet.go:193] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.664771ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
I1215 14:50:15.990092     230 loader.go:405] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1215 14:50:15.990751     230 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
I1215 14:50:15.991227     230 loader.go:405] Config loaded from file:  /etc/kubernetes/kubelet.conf

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

So, at this point, we’ve answered our second question as well,
i.e., can I create a kubeadm token manually and use it to join an existing Kubernetes cluster successfully?
Yes, I can! We saw it above!
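(For the record, creating such a token on the control-plane is a one-liner; a hedged sketch, with the secret part redacted since the real value must stay private:

❯ kubeadm token create --print-join-command
kubeadm join kind-control-plane:6443 --token pqrstu.**************** --discovery-token-ca-cert-hash sha256:<hash>

The pqrstu token-id is the same one we will keep seeing in the CSR and Secret names below.)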

But wait, what next now?

We saw the following bit in our output:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

I have more questions now.

  • What is the “Certificate signing request”? And what “response” did we receive?
  • The Kubelet was informed of the new secure connection details. How?

Let’s try to answer them now.


Q: What is the “Certificate signing request”? And what “response” did we receive?

So, back to the control-plane node, let’s check the following:

❯ kubectl get certificatesigningrequest -A

NAME        AGE     SIGNERNAME                                    REQUESTOR                        REQUESTEDDURATION   CONDITION
csr-c5j6z   7m46s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:pqrstu          <none>              Approved,Issued
csr-qzbvk   112m    kubernetes.io/kube-apiserver-client-kubelet   system:node:kind-control-plane   <none>              Approved,Issued

OK, so, we see two CertificateSigningRequest (CSR) objects.
One of them was requested by the system:node:kind-control-plane user (our kind-control-plane node).
And the other one was requested by us, through the user system:bootstrap:pqrstu from the docker container node (joining-node).
And both are in the condition Approved,Issued.
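That system:bootstrap:pqrstu username comes straight from the bootstrap token, which is stored as a Secret in the kube-system namespace, named after its token-id (a sketch; values are base64-encoded and the secret part is redacted here):

❯ kubectl -n kube-system get secret bootstrap-token-pqrstu -o yaml
apiVersion: v1
kind: Secret
type: bootstrap.kubernetes.io/token
data:
  token-id: cHFyc3R1
  token-secret: ****
  usage-bootstrap-authentication: dHJ1ZQ==
  ...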

Let’s also see the body of the CSR object created by our request.

❯ kubectl get certificatesigningrequest csr-c5j6z -o yaml

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  ...
spec:
  groups:
  - system:bootstrappers
  - system:authenticated
  request: LS0tLS1CRUdJTi****LS0K
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - client auth
  username: system:bootstrap:pqrstu
status:
  certificate: LS0tLS1CRUdJTiBDR****LS0K
  conditions:
  - ...
    message: Auto approving kubelet client certificate after SubjectAccessReview.
    reason: AutoApproved
    status: "True"
    type: Approved

From the object definition, I understand that the user system:bootstrap:pqrstu made this CSR request for the purposes listed under usages: digital signature and client auth.
And it was AutoApproved (after a process called SubjectAccessReview, which I am not exploring in this blog, but I know it is important).
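(If you’re curious where that auto-approval comes from: kubeadm sets up RBAC so that CSRs from bootstrap-token users get approved automatically. A hedged sketch of where to look - the names below follow kubeadm’s conventions as I understand them:

❯ kubectl get clusterrolebinding kubeadm:node-autoapprove-bootstrap -o yaml

It should bind the ClusterRole system:certificates.k8s.io:certificatesigningrequests:nodeclient to the system:bootstrappers:kubeadm:default-node-token group.)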

Nice. But what did we receive as response?

I see, we got back a certificate and an Approved condition (with status: "True") in the CSR object’s status section.
So, that means the response we got back is the CSR getting approved (maybe; I’m still not 100% sure which response the message refers to).
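One small check to see what actually came back (a hedged sketch, assuming openssl is available; output format may vary):

❯ kubectl get csr csr-c5j6z -o jsonpath='{.status.certificate}' | base64 -d | openssl x509 -noout -subject -issuer

The subject should name our joining node (CN=system:node:<node-name>, O=system:nodes), and the issuer should be the cluster CA - i.e., the “response” is a real, usable client certificate.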


Q: The Kubelet was informed of the new secure connection details. How?

I do see some relevant logs (truncated to just show the relevant bits):

[kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

Let’s check if we see anything new inside our docker container node we joined from (joining-node).

root@83ab08acf723:/# tree /etc/kubernetes/
/etc/kubernetes/
|-- kubelet.conf
|-- manifests
`-- pki
    `-- ca.crt

3 directories, 2 files

We do.
We now have a new directory called /etc/kubernetes/, which contains a kubelet.conf file as well as pki/ca.crt
(the same files the logs pointed us to).

Let’s see the contents of the kubelet.conf file:

root@0ecd1b55abc8:/# cat /etc/kubernetes/kubelet.conf 

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUd****LS0tLS0K
    server: https://kind-control-plane:6443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

The certificate-authority-data field stores the public certificate of the cluster’s CA (which comes from the control-plane of the cluster).
If you decode the value (using echo "LS0tLS1CRUd****LS0tLS0K" | base64 -d), it will match the contents of /etc/kubernetes/pki/ca.crt on the kind-control-plane node.
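A quick way to verify that claim without eyeballing base64 walls (a sketch; run the first command on the joining node and the second on the kind-control-plane node):

root@0ecd1b55abc8:/# grep certificate-authority-data /etc/kubernetes/kubelet.conf | awk '{print $2}' | base64 -d | sha256sum

root@kind-control-plane:/# sha256sum /etc/kubernetes/pki/ca.crt

If both print the same digest, it’s the same CA certificate.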

We also see the kubelet.conf file points to the location of the kubelet’s certificates & keys - /var/lib/kubelet/pki/.

As I understand (from reading docs):

The kubelet generates its own certificate/key pair locally (/var/lib/kubelet/pki/kubelet.crt and kubelet.key) and then sends a CSR (Certificate Signing Request) to the API Server.
The control-plane (the controller-manager?) then signs that CSR using the cluster CA keys (the same CA we see in the certificate-authority-data field).
And the signed certificate is then written to the path kubelet.conf points at (to be precise, /var/lib/kubelet/pki/kubelet-client-current.pem, which comes from the status.certificate: field of the CSR object).

The full tree of /var/lib/kubelet/pki/ looks like:

root@fd5e81a7604b:/# tree /var/lib/kubelet/pki/
/var/lib/kubelet/pki/
|-- kubelet-client-2025-12-16-07-35-31.pem
|-- kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2025-12-16-07-35-31.pem
|-- kubelet.crt
`-- kubelet.key

1 directory, 4 files

(Note: The part, “kubelet generate a private key and a CSR for submission to a cluster-level certificate signing process” was originally proposed as part of this design proposal - Kubelet TLS bootstrap)

So, from this point onwards, our joining-node (aka its kubelet) has its own set of signed certificates.
And so, the kubeadm bearer token is no longer required.
All further interactions with the control-plane will use these certificates via mTLS (mutual TLS).
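You can also inspect that client certificate directly on the node (a hedged sketch, assuming openssl exists inside the node container; the .pem holds the certificate followed by the key, and openssl x509 reads the first block):

root@0ecd1b55abc8:/# openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -dates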

Also, note that not just certificates but a lot more was added to the filesystem of the joining-node.
I don’t understand the details of every single listed item, but here’s the full tree:

root@0ecd1b55abc8:/# tree /var/lib/kubelet/
/var/lib/kubelet/
|-- allocated_pods_state
|-- checkpoints
|-- config.yaml
|-- cpu_manager_state
|-- device-plugins
|   `-- kubelet.sock
|-- dra_manager_state
|-- instance-config.yaml
|-- kubeadm-flags.env
|-- memory_manager_state
|-- pki
|   |-- kubelet-client-2025-12-16-10-18-54.pem
|   |-- kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2025-12-16-10-18-54.pem
|   |-- kubelet.crt
|   `-- kubelet.key
|-- plugins
|-- plugins_registry
|-- pod-resources
|   `-- kubelet.sock
`-- pods
    |-- 4683e97e-5735-459f-b965-24a5ffd2f63e
    |   |-- plugins
    |   |   `-- kubernetes.io~empty-dir
    |   |       `-- wrapped_kube-api-access-7v75d
    |   |           `-- ready
    |   `-- volumes
    |       `-- kubernetes.io~projected
    |           `-- kube-api-access-7v75d
    |               |-- ca.crt -> ..data/ca.crt
    |               |-- namespace -> ..data/namespace
    |               `-- token -> ..data/token
    `-- d6d5a0a6-4c4a-4cc0-984f-a63129fbefca
        |-- plugins
        |   `-- kubernetes.io~empty-dir
        |       |-- wrapped_kube-api-access-c4l5m
        |       |   `-- ready
        |       `-- wrapped_kube-proxy
        |           `-- ready
        `-- volumes
            |-- kubernetes.io~configmap
            |   `-- kube-proxy
            |       |-- config.conf -> ..data/config.conf
            |       `-- kubeconfig.conf -> ..data/kubeconfig.conf
            `-- kubernetes.io~projected
                `-- kube-api-access-c4l5m
                    |-- ca.crt -> ..data/ca.crt
                    |-- namespace -> ..data/namespace
                    `-- token -> ..data/token

25 directories, 24 files

Q: Why is there a symmetric token used by kubeadm?

OK, I already feel I learnt quite a bit.

Yet, our very first question is still not answered,
i.e., why is there a symmetric token used by kubeadm?

I was actually having a chat with the creator of Kubeadm himself, Lucas Käldström (yes, the same person who wrote the above linked thesis).

What I learnt is - even though it’s a symmetric, shared string, the token itself has two parts, where the first part (token-id) is supposed to be treated as a public entity and the second part (token-secret) as a private entity.

Because kubeadm tokens are used for establishing bidirectional trust between the client (in our case, joining-node) and the server (the control-plane, api-server).

  • For the client (joining-node) to establish trust in the server (the control-plane, api-server), we saw that the first part (token-id) of the token is used (system:bootstrap:pqrstu and the matching Secret and clusterrolebinding).
  • For the server (the control-plane, api-server) to establish trust in the client (joining-node), the entire shared token (both token-id and token-secret) can be used.

    If you look at the full kubeadm join ... logs again, one of the early discovery steps reads - [discovery] Cluster info signature and contents are valid and no TLS pinning was specified.

    This token (stored as a Secret object on the control-plane, aka the server side) is used to sign the cluster-info ConfigMap (in the kube-public namespace), which carries the cluster’s CA certificate and API server address.

    And then on the client side (joining-node), the received signed ConfigMap can be authenticated using the same shared token.

    The process is explained briefly here - ConfigMap Signing,
    but the important part is - You can verify the JWS (signature) using the HS256 scheme (HMAC-SHA256) with the full token (e.g. 07401b.f395accd246ae52d) as the shared secret.

So, even though it’s a symmetric, shared token, it has similarities to asymmetric key pairs, with public and private parts used for separate purposes.
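You can look at the signed discovery data yourself (a sketch; the jws- key name embeds the token-id, and values are redacted here):

❯ kubectl -n kube-public get configmap cluster-info -o yaml

apiVersion: v1
kind: ConfigMap
data:
  jws-kubeadm-token-pqrstu: eyJhbGciOiJIUzI1NiIsImtpZCI6****
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUd****
        server: https://kind-control-plane:6443
    ...

The jws-kubeadm-token-<token-id> entry is the HMAC-SHA256 signature over the kubeconfig payload; anyone holding the full token can recompute and verify it.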

There’s also a design proposal called bootstrap discovery, which discusses and in fact proposed the flow we see in our logs.

Finally, I will end this post with this diagram, which, at least for me, nicely summarises the entire process we followed from start to end:

New node
  │
  │ kubeadm join
  │
  ▼
API Server (unauthenticated)
  │
  │ token-based auth
  ▼
CSR created
  │
  │ CertificateSigningRequest
  ▼
Controller approves CSR
  │
  │ signed by CA
  ▼
kubelet gets client cert
  │
  │ mTLS from now on
  ▼
FULLY TRUSTED NODE

With that, thank you for reading so far.

And I hope you also got to learn a few new things. o/


PS:

Even though the node was able to join the control-plane, it wasn’t really in a Ready state (and I left it there, didn’t troubleshoot it further).

The Kubelet logs read something like:

failed to mount rootfs component: mount source "overlay" ... err: invalid argument
December 15, 2025 12:00 AM

Johnnycanencrypt 0.17.0 released

A few weeks ago I released Johnnycanencrypt 0.17.0. It is a Python module written in Rust, which provides OpenPGP functionality, including the ability to use Yubikey 4/5 devices as smartcards.

Added

  • Adds verify_userpin and verify_adminpin functions. #186

Fixed

  • #176 updates kushal's public key and tests.
  • #177 uses sequoia-openpgp 1.22.0
  • #178 uses scriv for changelog
  • #181 updates pyo3 to 0.27.1
  • #42, we now have only acceptable expect calls and no unwrap calls.
  • Removes cargo clippy warnings.

The build system has now moved back to maturin. I managed to clean up CI, and I am now testing properly on all 3 platforms (Linux, Mac, Windows). Until this release I had to manually test the smartcard functionality by connecting a Yubikey on Linux/Mac systems, but that will change for future releases. More details will come out soon :)

December 14, 2025 08:16 AM

What happens when Kind doesn't have enough IP(s)?

I wanted to write a quick blog to document a tiny experiment I ran last week.
Just dumping my rough notes as they are.

What I want to test is the scenario of creating a (Kind) cluster when it doesn’t have enough IP addresses to assign internally.
Meaning, I try to create a Kind cluster and give it only a docker bridge network with 20 or 50 IP addresses (basically a very tiny pool of IP(s)).

Actually, now that I think about it more, 20-50 IP(s) are too many for my experiment.
Because the docker bridge IP pool will only be used for assigning IP(s) to the kind nodes (the control-plane and worker nodes).
Inside these kind nodes, the Pod and container IP(s) come from the node’s own pool of private IP addresses, so they don’t come from the host’s docker bridge network (a quick way to see this split is sketched right after the list below).

Therefore, the flow is roughly like:

  • (my host network) sets aside a little set of private IP for the docker bridge ->
  • (then docker bridge network) assigns an IP to a node ->
  • (node internal network) which assigns IP(s) to pods and containers
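A quick way to see the second and third layers side by side, once a kind cluster is up (a hedged sketch; the flag syntax is standard docker/kubectl):

❯ docker network inspect kind -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}'     # layer 2: where node IP(s) come from
❯ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'   # layer 3: where pod IP(s) come from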

With this understanding, I feel I should create a docker bridge network with even fewer IP(s), let’s say 5.
And with this newly created docker bridge network, if I try to create a Kind cluster with 5 or more nodes, at least 1 node will never get an IP.
And that is exactly the behavior I want to test.

I also know that out of these 5 IP(s):

  • one will be used as gateway, so that is gone,
  • and then the rest 4 will be assigned to 4 nodes

(Note: this above understanding is incomplete right now, will be fixed later in the post.)

So, with that mathematics done, let’s run the experiment.


First off, create a Kind cluster:

cat kind-config.yaml 
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
  - role: worker


❯ kind create cluster --name ip-test --retain --config kind-config.yaml
Creating cluster "ip-test" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-ip-test"
You can now use your cluster with:

kubectl cluster-info --context kind-ip-test

Have a nice day! 👋

Number one

To create a new docker bridge network, I need to run the following command:

❯ docker network create --driver bridge  \
    --subnet 172.20.0.0/29  \
    --gateway 172.20.0.1 \
    --aux-address "reserved1=172.20.0.6" \
    kind-small-net

Notice the flag I passed, --subnet 172.20.0.0/29.
This translates to the private IP network 172.20.0.0 with a subnet mask of 255.255.255.248 (/29).
This subnet mask gives me exactly 8 IP addresses
(and that’s the closest I can get to an IP pool with exactly 5 usable IP(s)).

So, how these 8 IP(s) will be used, is explained below:

  • 172.20.0.0 = network (not usable)
  • 172.20.0.1 → commonly used as gateway (Docker sets a gateway)
  • 172.20.0.2 – 172.20.0.6 = usable host addresses (that’s 5 addresses here)
  • 172.20.0.7 = broadcast (not usable)

Also notice that I reserved one of the 5 available IP addresses using the flag --aux-address "reserved1=172.20.0.6", to create an IP-constrained scenario.
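For anyone who wants the subnet math checked by a tool rather than by hand (a sketch; I’m assuming some variant of ipcalc is installed, and the exact output format differs between implementations):

❯ ipcalc 172.20.0.0/29
Network:    172.20.0.0/29
HostMin:    172.20.0.1
HostMax:    172.20.0.6
Broadcast:  172.20.0.7
Hosts/Net:  6

6 usable hosts, minus the gateway (.1) and the reserved aux-address (.6), leaves exactly 4 assignable IP(s) for 5 nodes.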


Number two

Kind always creates a default docker bridge network named “kind” automatically.

So, I can try to create a new docker bridge network (kind-small-net) with a constrained IP pool, like we did above,
and then try to connect the existing Kind cluster nodes to this newly created bridge network.

Like following:

for c in $(docker ps --filter "name=ip-test" -q); do   docker network connect kind-small-net $c; done
Error response from daemon: no available IPv4 addresses on this network's address pools: kind-small-net (15f712efffba69730048b9f826e3e68702646a37df4d09c86288fca47a5a52f6)

Notice, I got an error - no available IPv4 addresses on this network's address pools.
So far, everything is going as expected.

Now, let’s check if anything happened to the cluster:

❯ kubectl get nodes -o wide
NAME                    STATUS     ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
ip-test-control-plane   NotReady   control-plane   9m4s    v1.34.0   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker          NotReady   <none>          8m48s   v1.34.0   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker2         NotReady   <none>          8m49s   v1.34.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker3         NotReady   <none>          8m49s   v1.34.0   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker4         NotReady   <none>          8m49s   v1.34.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3


❯ docker container ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
aca02654b39a   kindest/node:v1.34.0   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes                               ip-test-worker
a7da65a5a3f7   kindest/node:v1.34.0   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes                               ip-test-worker4
4605b3acc669   kindest/node:v1.34.0   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes                               ip-test-worker3
856fbf8aec0d   kindest/node:v1.34.0   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes   127.0.0.1:46031->6443/tcp   ip-test-control-plane
fcb796be89c8   kindest/node:v1.34.0   "/usr/local/bin/entr…"   13 minutes ago   Up 13 minutes                               ip-test-worker2

NO! All nodes still have an IP assigned to them.
And none of these assigned IP(s) are from our newly created bridge network (at least not in the kubectl output).
(And yes, I also see the NotReady status, so that is something.)

What’s going on? Let’s inspect both the network bridges:

❯ docker network inspect kind

        "Name": "kind",
         ...
        "Scope": "local",
        "Driver": "bridge",
         ...
        "IPAM": {
            ...
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
            ]
        },
        ...
        "Containers": {
            "4605b3acc6699173bd975a3d6b74d25e688eecfce644962bd7fb26c50d42f890": {
                "Name": "ip-test-worker3",
                ...
                "IPv4Address": "172.18.0.3/16",
            },
            "856fbf8aec0d287fea50ce9255260a22a1f707e12ea34aac313e3b356ffc3d8d": {
                "Name": "ip-test-control-plane",
                ...
                "IPv4Address": "172.18.0.6/16",
            },
            "a7da65a5a3f7dc78947e33d8f797854c47630adbee68ea30a46590a5238862ac": {
                "Name": "ip-test-worker4",
                ...
                "IPv4Address": "172.18.0.5/16",
            },
            "aca02654b39a931e9d27e313f61a78719eb389cb008e86f078c184fac2bae4e7": {
                "Name": "ip-test-worker",
                ...
                "IPv4Address": "172.18.0.2/16",
            },
            "fcb796be89c800e6e6107026ab04f448b9efa889957c10253f181bd63fa88075": {
                "Name": "ip-test-worker2",
                ...
                "IPv4Address": "172.18.0.4/16",
            }
        },

and

❯ docker network inspect kind-small-net

        "Name": "kind-small-net",
         ...
        "Scope": "local",
        "Driver": "bridge",
         ...
        "IPAM": {
            ...
            "Config": [
                {
                    "Subnet": "172.20.0.0/29",
                    "Gateway": "172.20.0.1",
                    "AuxiliaryAddresses": {
                        "reserved1": "172.20.0.6"
                    }
                }
            ]
        },
        ...
        "Containers": {
            "4605b3acc6699173bd975a3d6b74d25e688eecfce644962bd7fb26c50d42f890": {
                "Name": "ip-test-worker3",
                ...
                "IPv4Address": "172.20.0.4/29",
            },
            "856fbf8aec0d287fea50ce9255260a22a1f707e12ea34aac313e3b356ffc3d8d": {
                "Name": "ip-test-control-plane",
                ...
                "IPv4Address": "172.20.0.5/29",
            },
            "a7da65a5a3f7dc78947e33d8f797854c47630adbee68ea30a46590a5238862ac": {
                "Name": "ip-test-worker4",
                ...
                "IPv4Address": "172.20.0.3/29",
            },
            "aca02654b39a931e9d27e313f61a78719eb389cb008e86f078c184fac2bae4e7": {
                "Name": "ip-test-worker",
                ...
                "IPv4Address": "172.20.0.2/29",
            }
        },

OK, so, 4 out of the 5 nodes (1 control-plane + 3 workers) are assigned an IP from the new kind-small-net network.

But the entire set (1 control-plane + 4 workers) is still assigned IPs from the default kind network.

Let’s try one more thing.
Just like the docker network connect command, there’s also a command to disconnect containers from a network.
Let’s run that and see if it makes the Kind cluster nodes fall back to the new kind-small-net network IP(s).

for c in $(docker ps --filter "name=ip-test" -q); do   docker network disconnect kind $c; done

❯ docker network inspect kind

        "Name": "kind",
        ...
        "Scope": "local",
        "Driver": "bridge",
        ...
        "IPAM": {
            ...
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                },
                ...
            ]
        },
        ...
        "Containers": {},
        ...


❯ kubectl get nodes -o wide
NAME                    STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
ip-test-control-plane   NotReady   control-plane   28m   v1.34.0   172.18.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker          NotReady   <none>          28m   v1.34.0   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker2         NotReady   <none>          28m   v1.34.0   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker3         NotReady   <none>          28m   v1.34.0   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker4         NotReady   <none>          28m   v1.34.0   172.18.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3

❯ for c in $(docker ps --filter "name=ip-test" -q); do   docker network disconnect kind $c; done
Error response from daemon: container aca02654b39a931e9d27e313f61a78719eb389cb008e86f078c184fac2bae4e7 is not connected to network kind
Error response from daemon: container a7da65a5a3f7dc78947e33d8f797854c47630adbee68ea30a46590a5238862ac is not connected to network kind
Error response from daemon: container 4605b3acc6699173bd975a3d6b74d25e688eecfce644962bd7fb26c50d42f890 is not connected to network kind
Error response from daemon: container 856fbf8aec0d287fea50ce9255260a22a1f707e12ea34aac313e3b356ffc3d8d is not connected to network kind
Error response from daemon: container fcb796be89c800e6e6107026ab04f448b9efa889957c10253f181bd63fa88075 is not connected to network kind

Ok! So, all Kind nodes are indeed disconnected from the default “kind” network now.
But they still have an IP assigned from the old “kind” network only (i.e., they didn’t fall back to the newly created bridge “kind-small-net”).

So, it looks like Kind only looks at the automatically created default docker bridge network (“kind”) for its cluster configuration.

Therefore, regardless of me creating a new docker bridge network and attaching the existing Kind node containers to it, Kind will always assign IP(s) from this default bridge network to the kind nodes.
And so, no IP exhaustion scenario will happen.
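(As an aside - kind does seem to have an experimental escape hatch here: an environment variable that tells it which docker network to use instead of the default “kind”. A hedged sketch, since the variable is explicitly marked experimental:

❯ KIND_EXPERIMENTAL_DOCKER_NETWORK=kind-small-net kind create cluster --name ip-test --config kind-config.yaml

I went the other way instead, recreating the default network, as described next.)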


Number three

If I still want the Kind cluster to use a custom IP pool, the way to do that is:

  • delete the existing docker bridge network named “kind” (if one exists), and
  • recreate it manually, with the same name “kind”, and the custom tiny IP pool I need.

Like following:

❯ docker network rm kind
kind

❯ docker network inspect kind
[]
Error response from daemon: network kind not found


❯ docker network create --driver bridge  \
    --subnet 172.20.0.0/29  \
    --gateway 172.20.0.1 \
    --aux-address "reserved1=172.20.0.6" \
    kind

f64a9d47cf585e9e61c3d25da2b3d3684f02b633b53ee7c053a60c2da0eafd84

❯ docker network inspect kind

        "Name": "kind",
        "Id": "f64a9d47cf585e9e61c3d25da2b3d3684f02b633b53ee7c053a60c2da0eafd84",
        "Created": "2025-12-16T18:35:37.818562813+05:30",
        "Scope": "local",
        "Driver": "bridge",
        ...
        "IPAM": {
            ...
            "Config": [
                {
                    "Subnet": "172.20.0.0/29",
                    "Gateway": "172.20.0.1",
                    "AuxiliaryAddresses": {
                        "reserved1": "172.20.0.6"
                    }
                }
            ]
        },
        ...
        "ConfigOnly": false,
        "Containers": {},
        ...

And with the required IP-constrained docker bridge network named “kind” in place, let’s create the Kind cluster as follows:

❯ kind create cluster --name ip-test --retain --config kind-config.yaml
Creating cluster "ip-test" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✗ Preparing nodes 📦 📦 📦 📦 📦  
ERROR: failed to create cluster: command "docker run --name ip-test-worker2 --hostname ip-test-worker2 --label io.x-k8s.kind.role=worker --privileged --security-opt seccomp=unconfined --security-opt apparmor=unconfined --tmpfs /tmp --tmpfs /run --volume /var --volume /lib/modules:/lib/modules:ro -e KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER --detach --tty --label io.x-k8s.kind.cluster=ip-test --net kind --restart=on-failure:1 --init=false --cgroupns=private --volume /dev/mapper:/dev/mapper kindest/node:v1.34.0@sha256:7416a61b42b1662ca6ca89f02028ac133a309a2a30ba309614e8ec94d976dc5a" failed with error: exit status 125
Command Output: a14eea4647334a84e95142893a735d1cf97bcb74def2793de4a6653c6f187cc9
docker: Error response from daemon: failed to set up container networking: no available IPv4 addresses on this network's address pools: kind (f64a9d47cf585e9e61c3d25da2b3d3684f02b633b53ee7c053a60c2da0eafd84)

Run 'docker run --help' for more information

OK, we managed to get the scenario working.
This time, the cluster failed right at bootstrap time, with the expected error:

failed to set up container networking: no available IPv4 addresses on this network's address pools: kind

And once again, docker network inspect shows that node ip-test-worker2 was the one that failed to get an IP from the pool.
Plus, the cluster is not responding.

❯ docker network inspect kind

        "Name": "kind",
        "Id": "f64a9d47cf585e9e61c3d25da2b3d3684f02b633b53ee7c053a60c2da0eafd84",
        "Created": "2025-12-16T18:35:37.818562813+05:30",
        "Scope": "local",
        "Driver": "bridge",
        ...
        "IPAM": {
            ...
            "Config": [
                {
                    "Subnet": "172.20.0.0/29",
                    "Gateway": "172.20.0.1",
                    "AuxiliaryAddresses": {
                        "reserved1": "172.20.0.6"
                    }
                }
            ]
        },
        ...
        "Containers": {
            "2230663632be9887858ac1037b1f01ec856122bf5ab02e6acf9188c6bfb12b32": {
                "Name": "ip-test-worker3",
                ...
                "IPv4Address": "172.20.0.3/29",
            },
            "6be8bb30074055a2049ccb50d066f0b1cd9cf62243f3d3c619b16c0e555dcf80": {
                "Name": "ip-test-control-plane",
                ...
                "IPv4Address": "172.20.0.4/29",
            },
            "7c545d00e21bbea07ed9a22226c80753acf09d3ed574914e711a8ddc67847013": {
                "Name": "ip-test-worker4",
                ...
                "IPv4Address": "172.20.0.2/29",
            },
            "f7b35fb4319944ba1ef7fb426e1708bac1bad0601b29d6cd1f43a2a4acb41233": {
                "Name": "ip-test-worker",
                ...
                "IPv4Address": "172.20.0.5/29",
            }

❯ kubectl get nodes
E1216 18:40:11.021156  144524 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?


Number four

When I ran the “kind create cluster” command above, I learnt there’s a flag called --retain that keeps the nodes (the respective docker containers) around even if the cluster bootstrap fails, for debugging purposes.

❯ docker container ps -a
CONTAINER ID   IMAGE                  COMMAND                  CREATED         STATUS         PORTS                       NAMES
a14eea464733   kindest/node:v1.34.0   "/usr/local/bin/entr…"   6 minutes ago   Created                                    ip-test-worker2
2230663632be   kindest/node:v1.34.0   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes                               ip-test-worker3
6be8bb300740   kindest/node:v1.34.0   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes   127.0.0.1:38589->6443/tcp   ip-test-control-plane
7c545d00e21b   kindest/node:v1.34.0   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes                               ip-test-worker4
f7b35fb43199   kindest/node:v1.34.0   "/usr/local/bin/entr…"   6 minutes ago   Up 6 minutes                               ip-test-worker

❯ docker exec -it ip-test-control-plane /bin/bash
root@ip-test-control-plane:/# exit

❯ for c in $(docker ps -a --filter "name=ip-test" -q); do   docker inspect --format '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$c"; done
/ip-test-worker2 
/ip-test-worker3 172.20.0.3
/ip-test-control-plane 172.20.0.4
/ip-test-worker4 172.20.0.2
/ip-test-worker 172.20.0.5

We can see that all containers have an IP assigned to them from our new custom “kind” network bridge, except ip-test-worker2.


Number five

Ok, let’s finish by fixing our cluster.

Let’s recreate the “kind” network bridge.
It is still going to be a custom network, but this time let’s remove the reservation on that last available, usable IP address.

❯ docker network create --driver bridge  \
    --subnet 172.20.0.0/29  \
    --gateway 172.20.0.1 \
    kind
0272f4e1a9d33eaf31a77a5aec2dece1cf95098345fb1b0bbbcb825901af0c2b

❯ kind create cluster --name ip-test --retain --config kind-config.yaml
Creating cluster "ip-test" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-ip-test"
You can now use your cluster with:

kubectl cluster-info --context kind-ip-test

Have a nice day! 👋

❯ kubectl get nodes -o wide
NAME                    STATUS     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION     CONTAINER-RUNTIME
ip-test-control-plane   NotReady   control-plane   41s   v1.34.0   172.20.0.6    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker          NotReady   <none>          26s   v1.34.0   172.20.0.5    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker2         NotReady   <none>          26s   v1.34.0   172.20.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker3         NotReady   <none>          26s   v1.34.0   172.20.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3
ip-test-worker4         NotReady   <none>          26s   v1.34.0   172.20.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.17.9-1-default   containerd://2.1.3

Done! We have the nodes created with IP addresses assigned from the new custom “kind” network pool.

I know the state of the nodes is NotReady, but that’s not part of this experiment.
(updated later - I know why all the nodes stayed in the NotReady state: I only used a kind-config that disables the default CNI setup 🤦‍♀️. Anyway, removing the following should fix it.)

networking:
  disableDefaultCNI: true

Next, I want to see what happens when I constrain the “PodCIDR” and “ServiceCIDR” pools; a config sketch for that follows. o/
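
Something like the following kind-config sketch should set that up; podSubnet and serviceSubnet are the relevant networking fields in the Kind cluster config, and the deliberately tiny CIDR values are just assumptions for that next experiment:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # the pool that per-node PodCIDR ranges get carved out of
  podSubnet: "10.244.0.0/28"
  # the pool that the ServiceCIDR (ClusterIP addresses) comes from
  serviceSubnet: "10.96.0.0/28"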

December 06, 2025 12:00 AM

2025

What do all the stars and daggers after the book titles mean?


Note to self, for this year: Read less, write more notes. Abandon more books.

January

  1. Murder at the Vicarage, Agatha Christie*
  2. The Body in the Library, Agatha Christie*
  3. The Moving Finger, Agatha Christie*
  4. Sleeping Murder, Agatha Christie*
  5. A Murder Is Announced, Agatha Christie*
  6. They Do It with Mirrors, Agatha Christie*
  7. My Horrible Career, John Arundel*
  8. The Veiled Lodger, Sherlock & Co. Podcast*
  9. Hardcore History, Mania for Subjugation II, Episode 72*
  10. A Pocket Full of Rye, Agatha Christie*
  11. 4.50 from Paddington, Agatha Christie*
  12. The Mirror Crack’d From Side to Side, Agatha Christie*
  13. As You Wish: Inconceivable Tales from the Making of The Princess Bride, Cary Elwes & Joe Layden*
  14. A Caribbean Mystery, Agatha Christie*
  15. At Bertram’s Hotel, Agatha Christie*
  16. Nemesis, Agatha Christie*
  17. Miss Marple’s Final Cases, Agatha Christie*

February

  1. A Shadow in Summer, Daniel Abraham*
  2. Black Peter, Sherlock & Co. Podcast, Season 25*
  3. On Writing with Brandon Sanderson, Episodes 1-4, Brandon Sanderson*
  4. A Betrayal in Winter, Daniel Abraham*
  5. I Will Judge You by Your Bookshelf, Grant Snider*
  6. The Art of Living, Grant Snider*
  7. The Shape of Ideas, Grant Snider*
  8. For the Love of Go, John Arundel*
  9. Powerful Command-Line Applications in Go, Ricardo Gerardi*
  10. Learning Go, Jon Bodner*
  11. An Autumn War, Daniel Abraham*
  12. The Price of Spring, Daniel Abraham*
  13. Math for English Majors, Ben Orlin (Notes)*
  14. Empire Podcast, The Three Kings, Episodes 212–214*#

March

  1. Companion to the Count, Melissa Kendall*
  2. Wisteria Lodge, Sherlock & Co. Podcast, Season 26*
  3. A Story of Love, Minerva Spencer*
  4. The Etiquette of Love, Minerva Spencer*
  5. A Very Bellamy Christmas, Minerva Spencer*
  6. Empire Podcast, The Rise and Fall of the Mughal Empire, Episodes 205–211, 215-222*#
  7. Empire Podcast, Britain’s Last Colony, Episodes 229-230*#
  8. Head First Java (3rd edition), Kathy Sierra, Bert Bates & Trisha Gee*
  9. Head First Go, Jay McGavren*
  10. The Rest is History, The French Revolution (Part II), Episodes 503–507*#
  11. The Rest is History, The French Revolution (Part III), Episodes 544-547*#
  12. Morris Chang & TSMC, Spring 2025, Episode 1, Acquired Podcast*#
  13. Rolex, Spring 2025, Episode 2, Acquired Podcast*#
  14. Head First C, Dawn Griffiths & David Griffiths*

April

  1. Head First Learn to Code, Eric Freeman*
  2. On Writing with Brandon Sanderson, Episodes 4.5-8, Brandon Sanderson*
  3. Spellfire Thief, Sarah Hawke*
  4. Thinking About Thinking, Grant Snider*
  5. The Disappearance of Lady Frances Carfax, Sherlock & Co. Podcast, Season 28*
  6. Deep Questions, Cal Newport, Episodes 01-10*#
  7. Deep Questions, Cal Newport, Episodes 11-20*#
  8. Unlovable, Darren Hayes*

May

  1. Deep Questions, Cal Newport, Episodes 21-30*#
  2. Dick Barton and the Secret Weapon, Edward J Mason*#
  3. Dick Barton and the Paris Adventure, Edward J Mason*#
  4. Dick Barton and the Cabatolin Diamonds, Edward J Mason*#
  5. Kill the Pharaoh, Victor Pemberton*#
  6. On Writing with Brandon Sanderson, Episodes 8-12, Brandon Sanderson*
  7. Deep Questions, Cal Newport, Episodes 31-40*#
  8. Trial & Error (The Hardy Boys), Franklin W. Dixon
  9. Understanding APIs and RESTful APIs Crash Course, Kalob Taulien (Udemy)*#
  10. System Collapse, Martha Wells*
  11. Deep Questions, Cal Newport, Episodes 31-40*#
  12. A Sham Engagement, Fil Reid
  13. A Hint of Scandal, Fil Reid
  14. Gideon the Ninth, Tamsyn Muir*#
  15. Deep Questions, Cal Newport, Episodes 41-50*#
  16. Deep Questions, Cal Newport, Episodes 51-60*#
  17. Harrow the Ninth, Tamsyn Muir*#

June

  1. Deep Questions, Cal Newport, Episodes 61-70*#
  2. Deep Questions, Cal Newport, Episodes 71-80*#
  3. Apple in China, Patrick McGee*
  4. Dreaming of Elisabeth, Camilla Lackberg*
  5. An Elegant Death, Camilla Lackberg*
  6. Steve Ballmer, Summer 2025, Episode 1, Acquired Podcast#
  7. Deep Questions, Cal Newport, Episodes 81-90*#
  8. The Rest is History, Warlords of the West, The Rise and Fall of the Franks, Episodes 520-525*#
  9. The Rest is History, Heart of Darkness, Horror in the Congo, Episodes 538-541*#
  10. Antifragile, Nassim Nicholas Taleb*
  11. A Man and a Woman, Robin Schone*
  12. Deep Questions, Cal Newport, Episodes 91-100*#
  13. A Scandal in Bohemia, Sherlock & Co. Podcast, Season 30*
  14. How to Read a Book, Mortimer J. Adler*
  15. The Secret Rules of the Terminal, Julia Evans*
  16. The Lover, Robin Schone
  17. Slide:ology, Nancy Duarte*

July

  1. Deep Questions, Cal Newport, Episodes 101-110*#
  2. Deep Questions, Cal Newport, Episodes 111-120*#
  3. The Rest is History, 1066: The Norman Conquest of England, Episodes 548-557*#
  4. Emacs Writing Studio, Peter Prevos*
  5. Deep Questions, Cal Newport, Episodes 121-130*#
  6. Empire Podcast, The History of Ireland, Episodes 231-246*#
  7. The Priory School, Sherlock & Co. Podcast, Season 32*
  8. The Adventures of Johnny Bunko: The Last Career Guide You’ll Ever Need, Daniel H. Pink*
  9. The Sketchnote Handbook, Mike Rohde*
  10. Business Etiquette, Ann Marie Sabath*
  11. Dare to Tempt an Earl This Spring, Sara Adrien & Tanya Wilde*
  12. How to Lose a Prince This Summer, Sara Adrien & Tanya Wilde*
  13. Empire Podcast, Victorian Narcos (The Opium Wars), Episodes 248-255*#
  14. Deep Questions, Cal Newport, Episodes 131-140*#
  15. Lost Islamic History, Firas Alkhateeb*
  16. Empire Podcast, Canada, Episodes 267-272*#
  17. Deep Questions, Cal Newport, Episodes 141-150*#
  18. 100 Tricks to Appear Smart in Meetings, Sarah Cooper*

August

  1. Empire Podcast, The Panama Canal, Episodes 273-277*#
  2. Deep Questions, Cal Newport, Episodes 151-160*#
  3. Dick Barton and the Smash and Grab Raiders, Edward J Mason*#
  4. Deep Questions, Cal Newport, Episodes 161-170*#
  5. Deep Questions, Cal Newport, Episodes 171-180*#
  6. Empire Podcast, Partitions (The Breakup of the British Indian Empire), Episodes 278-283*#
  7. I Do Everything I’m Told, Megan Fernandes*
  8. Deep Questions, Cal Newport, Episodes 181-190*#

September

  1. Deep Questions, Cal Newport, Episodes 191-200*#
  2. Deep Questions, Cal Newport, Episodes 201-210*#
  3. Deep Questions, Cal Newport, Episodes 211-220*#
  4. 300, Frank Miller*
  5. V for Vendetta, Alan Moore*
  6. The Killing Joke, Alan Moore*
  7. Watchmen, Alan Moore*
  8. Nona the Ninth, Tamsyn Muir*#
  9. Deep Questions, Cal Newport, Episodes 221-240*#
  10. Empire Podcast, The Suez Crisis, Episodes 284-288*#
  11. Empire Podcast, The Cholas, Episodes 289-290*#
  12. Vivian Maier: Street Photographer, John Maloof*
  13. Vivian Maier: Out of the Shadows, Richard Cahan & Michael Williams*
  14. Mastering Fujifilm Camera Menus, Joshua Chard*#
  15. Learn Manual Mode Photography in Under 1 Hour!, David Eastwell*#
  16. Street Photography Masterclass, Adam Tan*#

October

  1. Deep Questions, Cal Newport, Episodes 241-250*#
  2. The Rest is History, The First World War (1914–15), Episodes 594-599*#
  3. Picture Perfect Posing, Roberto Valenzuela*
  4. Picture Perfect Practice, Roberto Valenzuela*
  5. Alien Overlords Series, Theodora Taylor & Eve Vaughn
  6. The Rest is History, Greek Myths, Episodes 602-605*#
  7. Deep Questions, Cal Newport, Episodes 251-270*#
  8. Deep Questions, Cal Newport, Episodes 271-290*#
  9. Abbey Grange, Sherlock & Co. Podcast, Season 33*
  10. The Mazarin Stone, Sherlock & Co. Podcast, Season 34*
  11. The Missing Three-Quarter, Sherlock & Co. Podcast, Season 35*
  12. The Rest is History, Greek Myths, Episodes 602-605*#
  13. Talk Python in Production, Michael Kennedy*
  14. Deep Questions, Cal Newport, Episodes 291-310*#
  15. Classics of British Literature, John Sutherland (The Great Courses) Lectures 1-10*#
  16. Bootstrapping Microservices, Ashley Davis*
  17. The Book of Kubernetes, Alan Hohn*
  18. Google, Summer 2025, Episodes 2 & 4 and Fall 2025, Episode 1, Acquired Podcast#
  19. Deep Questions, Cal Newport, Episodes 311-330*#
  20. Empire Podcast, Gaza! The History, Episodes 291-301*#
  21. The Count of Monte Cristo, Alexandre Dumas*

November

  1. Kathryn, Minerva Spencer*
  2. Deep Questions, Cal Newport, Episodes 331-350*#
  3. Classics of British Literature, John Sutherland (The Great Courses) Lectures 11-31*#
  4. Trader Joe’s, Fall 2025, Episode 2, Acquired Podcast#
  5. GitOps, Florian Beetz, Anja Kammer & Simon Harrer*
  6. GitOps Cookbook, Natale Vinto & Alex Soto Bueno*
  7. GitOps and Kubernetes, Yuen, Matyshentsev, Ekenstam & Suen*
  8. Argo CD: Up & Running, Andrew Block & Christian Hernandez*
  9. Leveraging Kustomize for Kubernetes Manifests, Brent Laster*
  10. Deep Questions, Cal Newport, Episodes 331-350*#
  11. Classics of British Literature, John Sutherland (The Great Courses) Lectures 32-49*#
  12. Learning Helm, Butcher, Farina & Dolitsky*
  13. Kubernetes Up and Running, Burns, Beda & Hightower*
  14. Kubernetes Cookbook, Naik, Goasguen & Michaux*
  15. Deep Questions, Cal Newport, Episodes 351-370*#
  16. Wisdom Takes Work, Ryan Holiday*
  17. Right Thing, Right Now, Ryan Holiday*
  18. Soul Harvest (Dread Knight #2), Sarah Hawke*
  19. Dark Covenant (Dread Knight #3), Sarah Hawke*
  20. Rebirth (Dread Knight #4), Sarah Hawke*
  21. Talk of the Town, Jerry Pinto & Rahul Srivastava*
  22. The Penguin Classics Book, Henry Eliot*
  23. Flux CD for Absolute Beginners, Yogesh Raheja*#
  24. Wide Angle Photography, Chris Marquardt*
  25. A Wanton Adventure, Ramona Elmes*
  26. A Translation of Desire, Ramona Elmes*
  27. The Psychology of Human Misjudgment, Charles T. Munger*
  28. Laird’s Curse, Katy Baker*
  29. Redemption of a Rakehell, April Moran*
  30. The Complete Yes Minister, Jonathan Lynn & Antony Jay*

December

  1. The Complete Yes Prime Minister, Jonathan Lynn & Antony Jay*
  2. Deep Questions, Cal Newport, Episodes 371-380*#
  3. Confessions of a Rakehell, April Moran*
  4. Traefik API Gateway for Microservices: With Java and Python Microservices Deployed in Kubernetes, Rahul Sharma & Akshay Mathur*
  5. The Hound of the Baskervilles, Sherlock & Co. Podcast, Season 36*
  6. Coca Cola, Fall 2025, Episode 3, Acquired Podcast#
  7. The Way of Kings, Brandon Sanderson*
  8. Words of Radiance, Brandon Sanderson*
  9. Edgedancer, Brandon Sanderson*

December 01, 2025 12:15 AM

Git Worktree

A few days ago, during our office knowledge-sharing meeting, someone introduced the git worktree command. It lets you check out another branch in a parallel working directory alongside your current one, so you can start something new (or handle a hotfix) without stashing or committing your unfinished changes.

It turned out to be incredibly useful. With git worktree, you can maintain multiple working directories linked to the same Git repository with almost no friction.


Why use worktrees?

Imagine you're working on a long-running feature — say, an optimization — and suddenly you’re assigned an urgent production bug. Typically, you would stash your changes or make a temporary commit, switch branches, fix the bug, then restore everything. It's annoying and error-prone.

With worktrees, you can directly spin up a parallel working directory:

git worktree add <path>
# Example:
git worktree add ../hotfix

This creates a new linked worktree, associated with your current repository, with its own metadata and branch checkout. (With no branch argument, Git also creates a new branch named after the directory, hotfix in this example.) Your original work remains untouched.


Removing a worktree

Once you're done with the hotfix (or any task), removing the worktree is just as simple:

git worktree remove <path>

If you delete the directory manually, Git will eventually clean up its administrative files automatically (based on gc.worktreePruneExpire in git-config). You can also remove stale entries explicitly:

git worktree prune

Other useful worktree commands

1. Create a throwaway worktree (detached HEAD)

Perfect for quick experiments:

git worktree add -d <path>

2. Create a worktree for an existing branch

git worktree add <path> <branch>

This checks out the given branch into a new, isolated working directory.
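
There is also git worktree list, to see all the worktrees currently linked to a repository. A quick illustration (the paths and hashes here are made up):

git worktree list
# /home/user/project  abc1234 [main]
# /home/user/hotfix   def5678 [hotfix]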


Further reading

To dive deeper into git worktree:

  git help worktree

Cheers!

#Git #TIL #Worktree

November 30, 2025 01:27 PM

Moved From Reeder to Readkit



I have used Reeder for reading all my RSS feeds ever since the app launched over fifteen years ago. RSS is how I keep up with everything in the world outside and having a pleasant reading experience is very important to me.

It no longer serves that need for me.
For a long time, the way the app worked aligned with the way I want to read. Earlier this year though, the developer decided to take the app in a different direction, with a different reading experience. The older app is still available as Reeder Classic, but only a few months of use have shown me that the app is basically abandoned. The attention to detail is obviously now being applied to the new app.

Enter ReadKit.
I had used it briefly during the Google Reader apocalypse, when every feed reader was scrambling to find new backends to sync to. Reeder similarly had switched to local-only mode and was taking a while to support other services.
ReadKit in the meanwhile already had support for Feedwrangler, and so I switched to it until Reeder came back up to speed.

And I’ve switched to it for the foreseeable future.
It looks beautiful!
It does everything I want, shows everything the way I want and behaves just the way I want it to. The only knock I have against it is that it does not feel as fluid as Reeder does. But that’s nothing compared to the constant launch-and-relaunch dance I have to do with Reeder nowadays. Consistency and stability matter a lot to me.
Even better, it syncs natively with Miniflux, the service I use to actually fetch and read RSS feeds on my Linux desktop. No more Google Reader API!

This is a list of all my categories (with one of them expanded, click for a larger view)

Readkit App Screenshot


and this is a list of unread articles in a feed, alongside one that is open (once again, click to enlarge if you want to see details)

Readkit App Screenshot

That gnawing feeling at the back of my brain has now gone away.
Reading and catching up with the world is once again a glorious experience, thanks to ReadKit.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!


November 17, 2025 03:29 AM

Focus on How Long You Read, Not How Much, aka The Best Advice I Could Give You About Reading Lots of Books

Old Post

This is just an old post, about reading, that has moved from the personal section to here.


via a Tom Gauld Book1


Every so often, after one of my reading updates on social media, some of my young friends ask me how I get so much reading done.

So, I decided to answer it here for posterity and then just point folk here.

  1. You are not me.
    a. I am a book worm.
    b. I am much older than you, with lots more practice.
  2. You most probably want to rush through a hard, technical book.
    a. I find them as hard as you.
    b. I read them at as slow a pace as you.
    c. I interleave the hard stuff, with a lot of easy, “I-Love-This” fiction
  3. Speed Reading is Bullshit!
    Once you read a lot of books, you can pattern match and speed up or slow down, through whole blocks and paras and chapters and pages.
  4. Reading for studying’s sake is work and unavoidable and not quite related to reading for reading’s sake.
    a. These I pucker up and do anyway, just like taking bad medicine.

The only things that matter, when it comes to reading are …

  1. Be consistent. Read a little bit, daily.
    The trick to reading a lot is to read a little every day.
  2. And the trick to reading a little every day is to make it a habit.
  3. Be curious. Read whatever you want. Read whenever you want. Read wherever you want.
  4. Quit Books.
    You don’t have to finish it. You don’t have to slog through it.
    Set it down. Come back to it, tomorrow … or in a few decades.
    Or just throw it out and forget all about it.
  5. And if reading really becomes a sort of calling for you, then learn how to do it properly.2

That’s about it for now. If I remember something more, I’ll come back and tack it on here.


P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


  1. I forget which one! ↩︎

  2. totally optional. I learnt this really late in life and while it has enriched my reading experience, it had nothing to do with my love of reading. ↩︎

November 14, 2025 12:15 AM

Kubernetes, Note to Self: Need Load Balancer Installed on Bare Metal



Intended Audience

Mostly me. Also other grizzled sysadmins who are learning devops like me.

One thing that bit me when I was trying to expose my apps to the world on the home cluster is that Kubernetes on bare metal (I was using Kind at the time) expects a Service of type LoadBalancer to talk to an actual load balancer implementation, which, if you are on bare metal, you usually won’t have.
I then had to go expose a “NodePort” to gain access from outside, as sketched below.
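
A minimal sketch of that NodePort workaround, assuming a hypothetical app called my-app listening on port 8080 (all names and ports here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort            # reachable on every node's IP at the nodePort below
  selector:
    app: my-app
  ports:
    - port: 8080            # in-cluster service port
      targetPort: 8080      # container port
      nodePort: 30080       # externally reachable port (must fall in 30000-32767)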

So to expose my stuff in as “real world” a way as possible, we need to:

  1. Either install a load balancer implementation like MetalLB. OR
  2. Use a Kubernetes distribution that has a load balancer implementation built-in, like K3s.

I chose option 2 and used K3s, because I am, as they say in popular parlance, using Kubernetes at the edge.1
In which case, I prefer to have as many batteries built-in as possible.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!



  1. Although, from the articles I’ve read, if you’re doing a multiple node cluster, then you’re better off using MetalLB. ↩︎

November 11, 2025 02:07 AM

How to do polymorphism in C ?

Well, the title is clickbait, but it’s true in a limited sense.

If you've written some C code, you've probably used most of the features in C like structures, functions, pointers, arrays and perhaps even the preprocessor.

However, I will talk about one of the lesser-used features in C – union


The Union

A union allocates a single shared block of memory, large enough to hold its largest member (with some padding, depending on alignment). Unlike a struct, which allocates distinct memory for each member, a union allows multiple members to occupy the same memory space.

For example:

#include<stdio.h>
#include<string.h>

struct xyz {
    int x;
    float y;
    char z[10];
};

union tuv {
    int t;
    float u;
    char v[10];
};

int main(void) {
    struct xyz st_eg;
    union tuv un_eg;

    printf("%d\n", sizeof(st_eg)); // O/P: 20 bytes (4 + 4 + 10 + 2 bytes padding)
    printf("%d\n", sizeof(un_eg)); // O/P: 12 bytes (10 bytes for v + 2 bytes padding)

    strcpy(&un_eg.v, "HelloWorld");

    printf("%s\n", un_eg.v);  // O/P: HelloWorld 
    printf("%f\n", un_eg.u);  // O/P: 1143139122437582505939828736.000000

    return 0;
}

Here, the integer, float, and character array all occupy the same memory region. When "HelloWorld" is copied into the character array v, reading that same memory as a float reinterprets the first bytes of the string as a float, hence the huge number above.


  • But why do we need a union?
  • Why allocate memory for only the largest member, and not for all of them as a struct does?

A union is valuable when you want different interpretations of the same memory.


Example 1: Storing an IPv4 Address

#include<stdio.h>

typedef union {
    unsigned int ip_add;
    unsigned char bytes[4];
} ipv4_add;

int main(void) {
    ipv4_add my_address = {0};

    my_address.bytes[0] = 127;
    my_address.bytes[1] = 55;
    my_address.bytes[2] = 115;
    my_address.bytes[3] = 0;

    printf("%x\n", my_address.ip_add); // O/P: 73377f
    return 0;
}

Explanation

Using a union, we can store both the integer representation and the byte-wise representation of an IPv4 address within the same space. This approach eliminates the need for explicit bit-shifting or manual conversions. (Note that the result is byte-order dependent: the output 73377f above is what a little-endian machine produces, where bytes[0] is the least significant byte of ip_add.)
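
For comparison, here is a sketch of the manual bit-shifting that the union saves us from; it builds the same 0x0073377f value that a little-endian machine produces above:

unsigned int ip = 0;
ip |= (unsigned int)127;        /* bytes[0], the least significant byte */
ip |= (unsigned int)55  << 8;   /* bytes[1] */
ip |= (unsigned int)115 << 16;  /* bytes[2] */
ip |= (unsigned int)0   << 24;  /* bytes[3] */
/* printf("%x\n", ip);  prints 73377f, same as the union version */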


Example 2: Unions in Embedded Programming

Unions are widely used in embedded systems to represent hardware registers that can be accessed both as a whole and as individual fields.

#include<stdio.h>

union HWRegister {
    struct { // anonymous structure
        unsigned char parity;
        unsigned char control;
        unsigned char stopbits;
        unsigned char direction;
    };
    unsigned int reg;
};

int main(void) {
    union HWRegister gpioa;

    gpioa.reg = 0x14424423;
    printf("%x\n", gpioa.stopbits); // O/P: 14

    return 0;
}

In this example, the same memory can be accessed as a single 32-bit register or through specific bit fields. This design improves clarity while maintaining memory efficiency — a common requirement in low-level programming.


Example 3: A Glimpse of Polymorphism in C

Now coming back to the title , we can do something similar to OOP in C:

#include<stdio.h>

typedef enum {
    JSON_STR,
    JSON_BYTE,
    JSON_INT,
} json_type_t;

#define JSON_MAX_STR 64

typedef struct {
   json_type_t type;
   union {
       char str[JSON_MAX_STR];
       char byte;
       int number;
   };
} json_t;

void printJSON(json_t *json) {
    switch (json->type) {
        case JSON_STR:
            printf("%s\n", json->str);
            break;
        case JSON_BYTE:
            printf("%c\n", json->byte);
            break;
        case JSON_INT:
            printf("%d\n", json->number);
            break;
    }
}

int main(void) {
    json_t myJSON;
    myJSON.type = JSON_INT;
    myJSON.number = 97;

    printJSON(&myJSON);
    return 0;
}

Here, the structure json_t can hold one of several possible data types — a string, a single byte, or an integer. The active type is determined at runtime using the type field.

There are some issues with this: in C, the types are not tightly enforced by the compiler, so if we do

myJSON.type = JSON_STR; // instead of JSON_INT
myJSON.number = 97;
printJSON(&myJSON); // O/P: a

  • The output will be: a (the ASCII character of value 97)

And that's all.

print("Titas , signing out ")
November 06, 2025 02:31 PM

Rescue OpenSUSE Tumbleweed (recreate Grub config from Rescue System)

In the last 6 months, twice I had to get my work machine’s system board (the motherboard) replaced.

First for an “Integrated Graphics Error”. One day I got these very annoying beeps on my work machine, I ran a Lenovo Smartbeep scan using their mobile app, and it suggested contacting Lenovo support (immediately) to request a system board replacement.

The second time, the Wi-Fi (in fact, all wireless) stopped working on my machine.

For a few weeks following the first system board replacement, I thought it was some wifi firmware mismatch issue on my OpenSUSE Tumbleweed (TW) machine.
Because TW is a rolling release, once in a while a distribution upgrade breaks stuff, so it’s a normal thing.
But I remember that for the first few weeks after the hardware replacement, the Wi-Fi worked sometimes.
The Network Manager would detect it, but then soon after, it started to drop entirely. And I would get “Wifi adapter not found”.
And then for the last ~1.5 months, I have been relying entirely on Ethernet for Internet on my work machine.
And that won’t work when I’m travelling (I know, I can get an external dongle or something, but still).

So, I tried booting into a Mint Cinnamon session with a live USB stick, and it was clear it’s not a TW issue. Mint also didn’t detect any wireless network - zero, nil, nothing.
(Needless to say, over the last months, when I thought it was a firmware issue, I had tried many things, lots around the iwlwifi firmware, but nothing worked. I had been eyeing many recent upstream kernel bugzillas related to iwlwifi and I was convinced it was a firmware issue. And it wasn’t.)

Now, very fortunately, Lenovo Premium Support just works.
(For me it did! Twice I contacted them in the last 6 months, and both times an engineer visited almost the next day or in two, basically as soon as they had the replacement component delivered to them.)
Both times, they replaced the motherboard.
(My work machine is a ThinkPad Workstation and everything is just stuck on the motherboard, so if any tiny chip dies, it requires a full system board replacement.)

Both times the motherboard was replaced, it was almost a new machine, only with the same old storage.
(Very, very important storage. Because it still contains my old TW OS partitions and data, and all the precious system configurations which take a very long time to configure again.)
I did run backups before both replacements, but still, it’s a pain if I have to do a fresh OS reinstallation and set up everything again in the middle of a work week.

So, when the system board is replaced, I think it refreshes the BIOS, and my grub menu no longer sees the TW OS partitions, so it just boots directly into the mighty Windows Blue Screen screaming “the system can’t be fixed, and I need to do a fresh install”.

But don’t get fooled by that (not immediately, check once).
Chances are that the old OS partitions are still there, just not being detected by the Grub Bootloader.
And that was the case for me (both times).

And not to my surprise, the OpenSUSE TW “Rescue System” menu came to my rescue!
(well, to my surprise! Because let’s not forget, TW is a rolling release OS. So things can go south very very quickly.)

I did the following:

  • I created a live USB stick with OpenSUSE Tumbleweed.
    (It helped to have a stick with a full Offline image, and not the tiny Network image which will pull every single thing from the Internet.
    Because remember, “the Wi-Fi is not working on my machine”.
    Well, I could have connected to Ethernet, but still, the lesson is to have a proper stick ready with an offline image, so it should just boot.)

  • Now, put it in the machine, go to the “Boot Menu” (F10, IIRC), and pick the option to boot from the live USB stick.
    It will go to a grub menu.
    Skip all the immediate menu options like “OpenSUSE Tumbleweed installation”, etc.
    Go to “More …” and then “Rescue System”.
    It will do the usual “loading basic drivers > hardware detection > ask to pick a keyboard layout, et al.” and then give me the “Rescue Login:” prompt.
    Remember the username is “root” and there is no password.
    With that, I enter tty1:rescue:/ #.

    Now run the following set of commands:

    # first things first, check my disk and partitions
    ## (if they still exist, I move on. Otherwise, all is gone and there's nothing more to do)
    
    fdisk -l
    ## gives me something like the following (truncated, of course, to the important bits)
    Disk /dev/nvme0n1: xx.xx GiB, xxxxxx bytes, xxxxxx sectors
    Disk model: xxx PC xxxx xxxxx-xxxx-xxxx          
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: xxx
    Disk identifier: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxx
    
    Device           Start   End     Sectors   Size       Type
    /dev/nvme0n1p1   xxx     xxxx    xxxxx     260M       EFI System
    /dev/nvme0n1p2   xxxx    xxxxx   xxxx      xxx.xxG    Linux filesystem
    /dev/nvme0n1p3   xxxxx   xxxxx   xxxx      2G         Linux swap
    
    # in my case, the disk that has the OpenSUSE tumbleweed is "/dev/nvme0n1".
    # "/dev/nvme0n1p1" is the EFI partition
    # "/dev/nvme0n1p2" is the root partition
    # "/dev/nvme0n1p3" is the Swap partition
      
    # From this step onwards:
    # I need "/dev/nvme0n1p1" (EFI System) and "/dev/nvme0n1p2" (Linux Filesystem)
      
    # I need to mount these two partitions under "/mnt"
    # (make sure the `/mnt` directory is empty before mounting anything to it)
    
    cd /mnt
    ls  # should be empty
    cd ..
    
    mount /dev/nvme0n1p2 /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
      
    # next I need to mount "/dev", "/proc", "/sys", "/run" from the live environment into the mount directory
    
    mount -B /dev /mnt/dev
    mount -B /proc /mnt/proc
    mount -B /sys /mnt/sys
    mount -B /run /mnt/run
    
    # now chroot into the "/mnt" directory
    # the prompt will turn into `rescue:/ #` from the earlier `tty1:rescue:/ #`
    
    chroot /mnt   
    
    # now make the EFI variables available
    
    mount -t efivarfs none /sys/firmware/efi/efivars
    
    # now reinstall grub2, with `grub2-install`
    
    grub2-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=opensuse
    ## should output something like:
    Installing for x86_64-efi platform.
    Installation finished. No error reported.
    
    # then probe for other operating systems on this machine
      
    os-prober
    ## should output something like (and because my machine originally came with Windows, it still shows remnants of that)
    /dev/nvme0n1p1@/EFI/Microsoft/Boot/bootmgfw.efi:Windows Boot Manager:Windows:efi
    
    # now, create a new grub configuration file using grub2-mkconfig
    
    grub2-mkconfig -o /boot/grub2/grub.cfg
    ## should output something like:
    Generating grub configuration file ...
    Found theme: /boot/grub2/themes/openSUSE/theme.txt
    Found linux image: /boot/vmlinuz-x.xx.x-x-default
    Found initrd image: /boot/initrd-x.xx.x-x-default
    Warning: os-prober will be executed to detect other bootable partitions.
    Its output will be used to detect bootable binaries on them and create new boot entries.
    Found Windows Boot manager on /dev/nvme0n1p1@/EFI/Microsoft/Boot/bootmgfw.efi
    Adding boot menu entry for UEFI Firmware Settings ...
    done
    
    # If all good so far, exit out of chroot
    
    exit
    
    # Reboot the machine.
    # And remove the installation media
    
    reboot
    
  • As the system reboots (remember I have removed the installation media at this point), I go to the BIOS to confirm the boot order.

  • Now, I let it boot normally and it should give me a grub menu with proper “OpenSUSE Tumbleweed” boot entry.
    And that should work like before.
    Boot and login! (It did, twice I followed this process and it did).

    (A note - When I reboot into OpenSUSE TW after this new grub config creation on a new system board, I need to also make sure that “Secure Boot” is disabled in the BIOS menu.
    Otherwise it will not allow OpenSUSE TW to boot. It didn’t for me, because my Secure Boot was enabled.
    So, I had to disable it. And then it worked.
    After the first successful boot, I think I can enable it again.)
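
(Another sanity check I could have done, right after the grub2-install step and before rebooting, is to list the EFI boot entries from inside the chroot and confirm the new “opensuse” entry exists. A sketch, assuming efibootmgr is installed; the exact entries will differ:

efibootmgr
## should output something like:
BootOrder: 0000,0001
Boot0000* opensuse
Boot0001* Windows Boot Manager
)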

None of this process is my own making.
The whole credit goes to this very useful demo - How To Fix Grub in OpenSUSE | UEFI (KMDTech) (Thank you!)
It worked twice for me without a single hiccup.
(But of course, if you need to follow it, please do it carefully, after checking, cross-checking and understanding everything you’re typing.)

November 02, 2025 12:00 AM

Setup (again): HP Laser MFP 136nw

A few weeks back, I had to reset my router for firmware updates.

And because of that, some devices on my local network, in this case my HP printer stopped working on Wi-Fi.

I followed the following steps today, to make it work again.
(MJB helped lots! Thank you so much! Very grateful!)

  • I realised the IP (http://192.168.1.z) that was assigned to my HP Printer (before the router reset) was now taken by some other device on my local network, because of DHCP dynamically assigning IPs.

  • I connect to the HP Printer via Ethernet and access the HP Printer configuration page on the last assigned IP (http://192.168.1.z). (Because I am now connected to the HP Printer via Ethernet, the router gives preference to the HP Printer on the above IP, even though the DHCP Server had assigned this IP to another device on my network.)

  • I login to my HP Printer configuration page.
    I go to “Settings > Network Settings > Wi-Fi”.
    On this menu page, click on “Wi-Fi Settings > Advanced Settings > Network Setup”.
    Go to “SSID” and hit “Search List” and “Refresh”.
    From the drop down, pick the Wi-Fi SSID I want to connect to, at the bottom, pick “Network Key Setup” and put the updated SSID password in there (both in “Network Key” and “Confirm Network Key”).
    Don’t forget to hit “Apply”.

  • Now, the other thing I have to fix is that the IP address is still assigned by the Router’s DHCP server to another device on the LAN.
    I need to assign a proper IP to my HP Printer, outside the range of IPs available for the DHCP server to assign to devices dynamically.

  • For that, go to the Router admin page, login and go to “Local Network > LAN > IPv4”.
    Then go to the section “DHCP Server” and change “DHCP Start IP Address” and “DHCP End IP Address” respectively to some “192.168.1.a” and “192.168.1.b” and hit “Apply”.
    With this, the router will now have IP “192.168.1.a-1” and the DHCP server will only be able to dynamically assign IPs to devices within the assigned pool.
    (With this, what I am trying to do is limit the pool of IPs available to the DHCP server, so that I can manually assign an IP (“192.168.1.b+1”) to the HP Printer outside the limits of this DHCP pool. That way, the printer IP doesn’t conflict with any other device IP assigned by the DHCP server.)

  • Now login back to the Printer configuration page, go to “Settings > Network Settings > TCP/IPv4”.
    Here, in the “General” section, pick “Manual” under the “Assign IPv4 Address” option.
    And manually assign the following - (1) “IPv4 Address: 192.168.1.b+1”, (2) “Subnet Mask: 255.255.255.0”, and (3) “Gateway Address: 192.168.1.a-1” (should match the router IP address) to HP Printer.
    And hit “Apply”.
    With this, the HP Printer configuration page itself will reload to the new assigned IP address url (http://192.168.1.b+1).

  • After the above steps, I remove the Ethernet cable from the HP Printer and restart it.
    Then I check if I am still able to access the HP Printer on the assigned IP via Wi-Fi (http://192.168.1.b+1).
    Yes! It works now!

  • Now I need to test whether printing works over Wi-Fi.
    I am on an OpenSUSE Tumbleweed machine.
    I go to the “Settings > Printers” page.
    I have to make sure that my printer is showing up there.
    (It wasn’t before; I once had to manually add a printer and pick the latest matching model from the available database, but that’s not needed after my steps below.)

  • Yes, the printer shows up. I gave it a test print job. Printing over Wi-Fi is working now.

  • But Scanning still doesn’t work. Neither on Wi-Fi, nor on Ethernet.
    My system just doesn’t detect the scanner on my HP Printer at all.

  • Now, I go back to HP Printer configuration page (http://192.168.1.b+1).
    Go to “Settings > Network Settings” and ensure that “AirPrint” and “Bonjour(mDNS)” both are enabled.

  • Now, I need to do a few things at the OS level.
    Install (or verify they are already installed) the following set of packages.

    # install required packages 
      
    sudo zypper install avahi avahi-utils nss-mdns
    sudo zypper install hplip hplip-sane sane-backends
    sudo zypper install sane-airscan ipp-usb
    
    # enable, start and verify the status of the avahi daemon (make sure I use the `-l` flag to have all available information in the service status output)
    
    sudo systemctl enable avahi-daemon
    sudo systemctl start avahi-daemon
    sudo systemctl status -l avahi-daemon.service
    
    # make sure I have all avahi-related tools as well
      
    which avahi-browse
    which avahi-resolve
    which avahi-publish
    rpm -ql avahi | grep bin # gives me `/usr/sbin/avahi-daemon` and `/usr/sbin/avahi-dnsconfd`
    
    
    # ensure firewall is allowing mdns and ipp
    
    sudo firewall-cmd --permanent --add-service=mdns
    sudo firewall-cmd --permanent --add-service=ipp
    sudo firewall-cmd --reload
    sudo firewall-cmd --info-service=mdns
    sudo firewall-cmd --info-service=ipp
    
    # and restart firewall
      
    sudo systemctl restart firewalld
    sudo systemctl status -l firewalld
    
    # now check if avahi-browse can see devices advertised by my HP Printer
    
    avahi-browse -a | grep HP
    ## outputs something like the following. I need to make sure it has `_scanner._tcp` and `_uscan._tcp` showing up
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _ipp._tcp            local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _scanner._tcp        local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _uscan._tcp          local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _uscans._tcp         local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _http._tcp           local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _pdl-datastream._tcp local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _printer._tcp        local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _http-alt._tcp       local
      + wlp0s20f3 IPv4 HP Laser MFP 136nw (xx:xx:xx)                 _privet._tcp         local
    
    # if the device shows up, then check if the scanner is responding on the network
    
    ping -c3 192.168.1.x
    
    curl http://192.168.1.x:8080/eSCL # any xml output is fine, as long as there's something
    
    # final check
      
    scanimage -L
    ## it should list something like:
    device `airscan:xx:xx Laser MFP 136nw (xx:xx:xx)' is a eSCL HP Laser MFP 136nw (xx:xx:xx) ip=192.168.1.x
    
  • At this point, the “Scan Documents” app should be detecting the scanner on my HP printer (it did!)

  • Also, with Avahi working, my OS system “Settings > Printers” also got a HP Printer added automatically with the correct model name etc.
    (Scanner also, although that doesn’t show up as a menu item in the system settings.)
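
  • (Bonus: for an end-to-end check from the terminal, a test scan could look something like the following. The -o/--output-file flag needs a reasonably recent sane-backends, and the resolution value is just an example.)

    scanimage --format=png -o test-scan.png --resolution 300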

November 01, 2025 12:00 AM

Not anymore a director at the PSF board

This month I did my last meeting as a director of the Python Software Foundation board; the new board has already had their first meeting.

I decided not to run again in the election as:

  • I was a director from 2014 (except for 1 year when Python's random call decided to choose another name), which means 10 years, and that is long enough.
  • Being an immigrant in Sweden means my regular travel is very restricted, and that stress affects all parts of life.

When I first ran in the election, I did not think it would continue this long. But the Python community is amazing and I felt I should continue. But my brain told me to give the space to new folks.

I will continue taking part in all other community activities.

October 31, 2025 01:00 PM

ssh version output in stderr

Generally Linux commands print their version on stdout, for example
git --version or python --version. But not ssh. ssh -V prints output to stderr.

To test it you can do the following:

git version on stdout

> git --version 2> error 1> output 
> cat output
git version 2.51.0

ssh version on stderr

> ssh -V 2>> error 1>> output
> cat error
OpenSSH_9.9p1, OpenSSL 3.2.4 11 Feb 2025
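
And if you ever need ssh's version on stdout (to pipe it somewhere, for example), redirecting stderr into stdout works; a small sketch:

> ssh -V 2>&1 | cut -d',' -f1
OpenSSH_9.9p1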

Hope this will be helpful.

by Anwesha Das at October 12, 2025 09:01 PM

Joy of automation

After 145+ commits spread over multiple PRs, 450+ conversations and feedback, and accountable communication via several different communication mediums spanning over 2 years, the Ansible Release Management is finally completely automated, using GitHub Actions. When I joined Red Hat in November 2022, I was tasked with releasing the Ansible Community Package.

The first hurdle I faced was that there was no documented release process. What we had were release managers' private notes, spread over personal repositories, internal Red Hat Google Docs, and personal code. Since all those past release managers had left the organization (apart from one), it was very difficult to gather and figure out what, why, and how the release process worked. I had one supporter, my trainer (the then-release manager), Christian. He shared with me his notes and the steps he followed. He guided me on how he did the release.

Now we have a community release managers working group, where contributors from the community also take part and release Ansible. And we have two GitHub Actions:

  • The first one builds the package and also opens a PR to the repository, and then waits for human input.
  • Meanwhile, the release manager can use the second action to create another PR to the Ansible documentation repository from the updated porting guide from the first PR.
  • After the PRs are approved, the release manager can continue with the first action and release the Ansible wheel package and the source tarball to PyPI in a fully automated way using trusted publishing.

I would like to thank Felix, Gotmax and Sviatoslav for feedback during the journey, thank you.

Many say automation is bad. In many companies, management gets the wrong idea that, when good automation is in place, they can fire senior engineers and get interns or inexperienced people to get the job done. That works till something breaks down. The value of experience comes when we have to fix things in automation. Automation enables new folks to get introduced to things, and enables experienced folks to work on other things.

by Anwesha Das at July 27, 2025 10:42 PM

Arrow Function vs Regular Function in JavaScript


Yeah, everyone already knows the syntax is different. No need to waste time on that.

Let’s look at what actually matters — how they behave differently.


1. arguments

Regular functions come with a built-in object called arguments. Even if you don’t define any parameters, you can still access whatever got passed when the function was called.

Arrow functions? Nope. No arguments object. Try using it, and it’ll just throw an error.

Regular function:

function test() {
  console.log(arguments);
}

test(1, "hello world", true); 
// o/p
// { '0': 1, '1': 'hello world', '2': true }

Arrow function:

const test = () => {
  console.log(arguments); 
};

test(1, "hello world", true); // Throws ReferenceError

2. return

Arrow functions have an implicit return, but regular functions don't. I.e., an arrow function returns the result automatically when its body is a single expression (optionally wrapped in parentheses). Regular functions always require the return keyword.

Regular function:

function add(a, b) {
 const c = a + b;
}

console.log(add(5, 10)); // o/p : undefined 

Arrow function:

const add = (a, b) => (a + b);

console.log(add(5, 10)); // o/p : 15

3. this

Arrow functions do not have their own this binding. Instead, they lexically inherit this from the surrounding (parent) scope at the time of definition. This means the value of this inside an arrow function is fixed and cannot be changed using .call(), .apply(), or .bind().

Regular functions, on the other hand, have dynamic this binding — it depends on how the function is invoked. When called as a method, this refers to the object; when called standalone, this can be undefined (in strict mode) or refer to the global object (in non-strict mode).

Because of this behavior, arrow functions are commonly used in cases where you want to preserve the outer this context, such as in callbacks or within class methods that rely on this from the class instance.

Regular function :

const obj = {
  name: "Titas",
  sayHi: function () {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p : Titas

Arrow function :

const obj = {
  name: "Titas",
  sayHi: () => {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p :  undefined
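
A quick sketch of that .call()/.bind() difference (assuming it runs as a CommonJS script in Node, where top-level this is not the global object):

const regular = function () { return this.name; };
const arrow = () => this.name; // this captured lexically, at definition time

const ctx = { name: "Titas" };

console.log(regular.call(ctx)); // o/p : Titas (this rebound to ctx)
console.log(arrow.call(ctx));   // o/p : undefined (.call() cannot rebind an arrow's this)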

print("Titas signing out !")
July 23, 2025 07:20 PM

Debugging max_locks_per_transaction: A Journey into Pytest Parallelism

So I was fixing some slow tests, and whenever I ran them through the pytest command, I was greeted with the dreaded max_locks_per_transaction error.

My first instinct? Just crank up the max_locks_per_transaction from 64 to 1024.

But... that didn’t feel right. I recreate my DB frequently, which means I’d have to set that value again and again. It felt like a hacky workaround rather than a proper solution.

Then, like any developer, I started digging around — first checking the Confluence page for dev docs to see if anyone else had faced this issue. No luck. Then I moved to Slack, and that’s where I found this command someone had shared:

pytest -n=0

This was new to me. So, like any sane dev in 2025, I asked ChatGPT what this was about. That’s how I came across pytest-xdist.

What is pytest-xdist?

The pytest-xdist plugin extends pytest with new test execution modes — the most common one is distributing tests across multiple CPUs to speed up test execution.

What does pytest-xdist do?

Runs tests in parallel using <numprocesses> workers (Python processes), which is a game changer when:
– You have a large test suite
– Each test takes a significant amount of time
– Your tests are independent (i.e., no shared global state)
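
The flag that controls this is -n / --numprocesses. A few illustrative invocations (assuming pytest-xdist is installed):

pip install pytest-xdist

pytest -n auto   # spawn one worker per available CPU
pytest -n 4      # spawn exactly four workers
pytest -n 0      # no workers; run everything in the main process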


That’s pretty much it: I plugged in pytest -n=0 and boom, no more transaction lock errors. With 0 workers, the tests run sequentially in a single process, so they no longer pile up locks across parallel runs and exhaust Postgres’s lock table.

Cheers!

References
– https://pytest-xdist.readthedocs.io/en/stable/
– https://docs.pytest.org/en/stable/reference/reference.html

#pytest #Python #chatgpt #debugging

July 16, 2025 05:07 PM

Creating Pull request with GitHub Action

---
name: Testing Gha
on:
  workflow_dispatch:
    inputs:
      GIT_BRANCH:
        description: The git branch to be worked on
        required: true

jobs:
  test-pr-creation:
    name: Creates test PR
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: write
    env:
      GIT_BRANCH: ${{ inputs.GIT_BRANCH }}
    steps:
      - uses: actions/checkout@v4
      - name: Updates README
        run: echo date >> README.md

      - name: Set up git
        run: |
          git switch --create "${GIT_BRANCH}"
          ACTOR_NAME="$(curl -s https://api.github.com/users/"${GITHUB_ACTOR}" | jq --raw-output &apos.name // .login&apos)"
          git config --global user.name "${ACTOR_NAME}"
          git config --global user.email "${GITHUB_ACTOR_ID}+${GITHUB_ACTOR}@users.noreply.github.com"

      - name: Add README
        run: git add README.md

      - name: Commit
        run: >-
          git diff-index --quiet HEAD ||
          git commit -m "test commit msg"
      - name: Push to the repo
        run: git push origin "${GIT_BRANCH}"

      - name: Create PR as draft
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
          gh pr create
          --draft
          --base main
          --head "${GIT_BRANCH}"
          --title "test commit msg"
          --body "pr body"

      - name: Retrieve the existing PR URL
        id: existing-pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url= >> "${GITHUB_OUTPUT}"

          gh pr view
          --json 'url'
          --jq '.url'
          --repo '${{ github.repository }}'
          '${{ env.GIT_BRANCH }}'
          >> "${GITHUB_OUTPUT}"
      - name: Select the actual PR URL
        id: pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url=
          >> "${GITHUB_OUTPUT}"

          echo '${{steps.existing-pr.outputs.pull_request_url}}'
          >> "${GITHUB_OUTPUT}"

      - name: Log the pull request details
        run: >-
           echo 'PR URL: ${{ steps.pr.outputs.pull_request_url }}' | tee -a "${GITHUB_STEP_SUMMARY}"


      - name: Instruct the maintainers to trigger CI by undrafting the PR
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
            gh pr comment
            --body 'Please mark the PR as ready for review to trigger PR checks.'
            --repo '${{ github.repository }}'
            '${{ steps.pr.outputs.pull_request_url }}'

The above is an example of how to create a draft PR via GitHub Actions. We need to give the GitHub action permission to create a PR in the repository (workflow permissions in the settings).

workflow_permissions.png

Hopefully, this blogpost will help my future self.

by Anwesha Das at July 06, 2025 06:22 PM

ChatGPT and Images

I’ve been working on a few side projects and using ChatGPT for ideation and brainstorming around ideas and features for the MVP. As part of this, I needed a logo for my app. Naturally, I turned to AI to help me generate one.

However, I noticed that when generating images, ChatGPT doesn’t always follow the guidelines perfectly. Each time I asked for a new version, it would create a completely different image, which made it difficult to iterate or make small tweaks.

But I found a better way.

Instead of generating a brand new image every time, I first explained my app idea and the name. ChatGPT generated an image I liked.

So I asked ChatGPT to generate the JSON for the image instead. I then manually tweaked the JSON file to adjust things exactly the way I wanted. When I asked ChatGPT to generate the image based on the updated JSON, it finally created the image as per my request — no random changes, just the specific adjustments I needed.

Exploration Phase

SplitX logo

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "black",
    "elements": [
      {
        "type": "text",
        "content": "SplitX",
        "font_style": "bold",
        "font_color": "white",
        "position": "center",
        "font_size": "large"
      },
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": ["#FF4E50", "#F9D423"]
          },
          {
            "section": "bottom-left",
            "gradient": ["#F9D423", "#FC913A"]
          },
          {
            "section": "top-right",
            "gradient": ["#24C6DC", "#514A9D"]
          },
          {
            "section": "bottom-right",
            "gradient": ["#514A9D", "#E55D87"]
          }
        ],
        "position": "center behind text",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

Final Design

SplitX logo

Updated JSON

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "transparent",
    "elements": [
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": [
              "#FF4E50",
              "#F9D423"
            ]
          },
          {
            "section": "bottom-left",
            "gradient": [
              "#F9D423",
              "#FC913A"
            ]
          },
          {
            "section": "top-right",
            "gradient": [
              "#24C6DC",
              "#514A9D"
            ]
          },
          {
            "section": "bottom-right",
            "gradient": [
              "#514A9D",
              "#E55D87"
            ]
          }
        ],
        "position": "center ",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

If you want to tweak or refine an image, first generate the JSON, make your changes there, and then ask ChatGPT to generate the image using your updated JSON. This gives you much more control over the final result.
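The same idea can also be scripted outside the ChatGPT UI. A rough sketch against the OpenAI images API (the model choice and prompt wording are my assumptions, not part of the workflow above):

# hypothetical: send the hand-tweaked JSON spec as part of the prompt
curl -s https://api.openai.com/v1/images/generations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-image-1",
        "prompt": "Render exactly the logo described by this JSON spec: <paste the updated JSON here>"
      }'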

Cheers!

P.S. Feel free to check out the app — it's live now at https://splitx.org/. Would love to hear what you think!

July 03, 2025 01:28 PM

How I understood the importance of FOSS, i.e., Free and Open Source Software

Hello people of the world wide web.
I'm Titas, a CS freshman trying to learn programming and build some cool stuff. Here's how I understood the importance of open source.

The first time I heard of open source was about 3 years ago in a YouTube video, but I didn't think much of it.
Read about it more and more on Reddit and in articles.

Fast forward to after high school — I'd failed JEE and had no chance of getting into a top engineering college. So I started looking at other options, found a degree and said to myself:
Okay, I can go here. I already know some Java and writing code is kinda fun (I only knew basics and had built a small game copying every keystroke of a YouTube tutorial).
So I thought I could learn programming, get a job, and make enough to pay my bills and have fun building stuff.

Then I tried to find out what I should learn and do.
Being a fool, I didn't look at articles or blog posts — I went to Indian YouTube channels.
And there was the usual advice: Do DSA & Algorithms, learn Web Development, and get into FAANG.

I personally never had the burning desire to work for lizard man, but the big thumbnails with “200k”, “300k”, “50 LPA” pulled me in.
I must’ve watched 100+ videos like that.
Found good creators too like Theo, Primeagen, etc.

So I decided I'm going to learn DSA.
First, I needed to polish my Java skills again.
Pulled out my old notebook and some YT tutorials, revised stuff, and started learning DSA.

It was very hard.
Leetcode problems weren't easy — I was sitting for hours just to solve a single problem.
3 months passed by — by then I had learnt arrays, strings, linked lists, searching, and sorting.
But solving Leetcode problems wasn't entertaining or fun.
I used to think — why should I solve these abstract problems if I want to work in FAANG (which I don't even know if I want)?

Then I thought — let's learn some development.
Procrastinated on learning DSA, and picked up web dev — because the internet said so.
Learnt HTML and CSS in about 2-3 weeks through tutorials, FreeCodeCamp, and some practice.

Started learning JavaScript — it's great.
Could see my output in the browser instantly.
Much easier than C, which is in my college curriculum (though I had fun writing C).

Started exploring more about open source on YouTube and Reddit.
Watched long podcasts to understand what it's all about.
Learnt about OSS — what it is, about Stallman, GNU, FOSS.
OSS felt like an amazing idea — people building software and letting others use it for free because they feel like it.
The community aspect of it.
Understood why it's stupid to have everything under the control of a capitalist company — one that can just decide one day to stop letting you use your own software that you paid for.

Now I’m 7 months into college, already done with sem 1, scored decent marks.
I enjoy writing code but haven't done anything substantial.
So I thought to ask for some help. But who to ask?

I remembered this distant cousin, Kushal, who lives in Europe and has built some great software; my mother mentioned him like he was some kind of genius. I once had a brief conversation with him over text about whether I should take admission in BCA rather than an engineering degree, and his advice gave me some motivation and positivity. He said:

“BCA or BTech will for sure get you a job faster than traditional studying. If you can put in the hours, that is way more important than IQ.
I have a very average IQ, but I just contributed to many projects.”

So 7 months later, I decided to text him again — and surprisingly, he replied and agreed to talk with me on a call.
Spoke with him for 45 odd minutes and asked a bunch of questions about software engineering, his work, OSS, etc.

Had much better clarity after talking with him.
He gave me the dgplug summer training docs and a Linux book he wrote.

So I started reading the training docs.

  • Step 0: Install a Linux distro → already have it ✅
  • Step 1: Learn touch typing → already know it ✅

Kept reading the training docs.
Read a few blog posts on the history of open source — already knew most of the stuff but learnt some key details.

Read a post by Anwesha on her experience with hacking culture and OSS as a lawyer turned software engineer — found it very intriguing.

Then watched the documentaries Internet's Own Boy and Coded Bias.
Learnt much more about Aaron Swartz than I knew — I only knew he co-founded Reddit and took his own life after being prosecuted for downloading the JSTOR academic archive at MIT.

Now I had a deeper understanding of OSS and the culture.
But I had a big question about RMS — why was he so fixated on the freedom to hack and change stuff in the software he owned?
(Yes, the Free in FOSS doesn’t stand for free of cost — it stands for freedom.)

I thought free of cost makes sense — but why should someone have the right to make changes to paid software?
Couldn't figure it out.
Focused on JS again — also, end-semester exams were coming.
My university has 3 sets of internal exams before the end-semester written exams. Got busy with that.

Kept writing some JS in my spare time.
Then during my exams...

It was 3:37 am, 5 June. I had my Statistics exam that morning.
I was done with studying, so I was procrastinating — watching random YouTube videos.
Then this video caught my attention:
How John Deere Steals Farmers of $4 Billion a Year

It went deep into how John Deere installs software into their tractors to stop farmers and mechanics from repairing their own machines.
Only authorized John Deere personnel with special software could do repairs.
Farmers were forced to pay extra, wait longer, and weren’t allowed to fix their own property.

Turns out, you don’t actually buy the tractor — you buy a subscription to use it.
Even BMW, GM, etc. make it nearly impossible to repair their cars.
You need proprietary software just to do an oil change.

Car makers won’t sell the software to these business owners, BUT they’ll offer $7,500/year subscriptions to use it. One auto shop owner explained how he has to pay $50,000/year in subscriptions just to keep his business running.

These monopolies are killing small businesses.

It’s not just India — billion-dollar companies everywhere are hell-bent on controlling everything.
They want us peasants to rent every basic necessity — to control us.

And that night, at 4:15 AM, I understood:

OSS is not just about convenience.
It’s not just for watching movies with better audio or downloading free pictures for my college projects.
It’s a political movement — against control.
It’s about the right to exist, and the freedom to speak, share, and repair.


That's about it. I'm not a great writer — it's my first blog post.

Next steps?
Learn to navigate IRC.
Get better at writing backends in Node.js.
And I'll keep writing my opinions, experiences, and learnings — with progressively better English.

print("titas signing out , post '0'!")
June 23, 2025 07:55 AM

OpenSSL legacy and JDK 21

openssl logo

While updating the Edusign validator to a newer version, I had to build the image with JDK 21 (which is available in Debian Sid). And while the application starts, it fails to read the TLS keystore file with a specific error:

... 13 common frames omitted
Caused by: java.lang.IllegalStateException: Could not load store from '/tmp/demo.edusign.sunet.se.p12'
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:140) ~[spring-boot-3.4.4.jar!/:3.4.4]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.createKeyStore(JksSslStoreBundle.java:107) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 25 common frames omitted
Caused by: java.io.IOException: keystore password was incorrect
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2097) ~[na:na]
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:228) ~[na:na]
at java.base/java.security.KeyStore.load(KeyStore.java:1500) ~[na:na]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:136) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 26 common frames omitted
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
... 30 common frames omitted

I understood that it was somehow unable to read the file due to a bad passphrase. But the same file with the same passphrase could be opened by the older version of the application (in the older containers).

After spending too many hours reading, I finally found the trouble: OpenSSL was using too new an algorithm. By default it uses AES_256_CBC for encryption and PBKDF2 for key derivation. But if we pass -legacy to the openssl pkcs12 -export command, it uses RC2_CBC or 3DES_CBC for certificate encryption, depending on whether the RC2 cipher is enabled.
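In other words, re-exporting the keystore with -legacy makes the file readable again. A minimal sketch, assuming the certificate and key live in PEM files (the input file names are illustrative; the output matches the keystore from the error above):

openssl pkcs12 -export -legacy \
    -in demo.edusign.sunet.se.crt \
    -inkey demo.edusign.sunet.se.key \
    -out /tmp/demo.edusign.sunet.se.p12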

This finally solved the issue and the container started cleanly.

June 04, 2025 02:06 PM

PyCon Lithuania, 2025

Each year, I try to experience a new PyCon. In 2025, PyCon Lithuania was added to my PyCon calendar.

pyon_lt_6.jpg

Day before the conference

What made this PyCon special is that we traveled there as a family, and the conference days coincided with the Easter holidays. We used that to explore the city—the ancient cathedrals, palaces, old cafes, and of course the Lithuanian cuisine: Šaltibarščiai, Balandeliai and Cepelinai.

Tuesday

The 22nd, the day before the conference, was all about practicing the talk and meeting the community. We had the pre-conference mingling session with the speakers and volunteers. It was time to meet some old and many new people. Then it was time for PyLadies. Inga from PyLadies Lithuania, Nina from PyLadies London, and I had a lovely dinner discussion—good food with the PyLadies community, technology, and us.

pyon_lt_2.jpg

Wednesday

The morning started early for us on the day of the conference. All three of us had different responsibilities during the conference: while Py was volunteering, I gave a talk and Kushal was the morning keynoter. A Python family in a true sense :)

pyon_lt_1.jpg

I had my talk, “Using PyPI Trusted Publishing for Ansible Release”, scheduled for the afternoon session. The talk was about automating the Ansible community package release process with GitHub Actions, using trusted publishing in PyPI. I explained what trusted publishing is, the need for it, and how to use it. I described the manual Ansible release process in a nutshell and then moved on to what the release process looks like now with GitHub Actions and trusted publishing. Then came the most important part: the lessons learned in the process and how other open-source communities can get help and benefit from it. Here is the link to the slides of my talk. I got questions regarding trusted publishing, my experience as a release manager, and of course Ansible.

pyon_lt_0.jpeg

Then it was time to bid goodbye to PyCon LT and come back home. See you next year! Congratulations to the organizers for doing a great job with the conference.

pyon_lt_4.jpg

by Anwesha Das at April 30, 2025 10:49 AM

Blog Questions Challenge 2025

1. Why did you make the blog in the first place?

This blog initially started as part of the summer training by DGPLUG, where the good folks emphasize the importance of blogging and encourage everyone to write—about anything! That motivation got me into the habit, and I’ve been blogging on and off ever since.

2. What platform are you using to manage your blog and why did you choose it?

I primarily write on WriteFreely, hosted by Kushal, who was kind enough to host an instance. I also occasionally write on my WordPress blog. So yeah, I have two blogs.

3. Have you blogged on other platforms before?

I started with WordPress because it was a simple and fast way to get started. Even now, I sometimes post there, but most of my recent posts have moved to the WriteFreely instance.

4. How do you write your posts?

I usually just sit down and write everything in one go, followed by the editing part: skimming through it once, making quick changes, and then hitting publish.

5. When do you feel most inspired to write?

Honestly, I don’t wait for inspiration. I write whenever I feel like it—sometimes in a diary, sometimes on my laptop. A few of those thoughts end up as blog posts, while the rest get lost in random notes and files.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

It depends. After reading a few books and articles on writing, I started following a simple process: finish a draft in one sitting, come back to it later for editing, and then publish.

7. Your favorite post on your blog?

Ahh! This blog post on Google Cloud IAM is one I really like because people told me it was well-written! :)

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

Nope! I like it as it is. Keeping it simple for now.

A big thanks to Jason for mentioning me in the challenge!

Cheers!

March 29, 2025 05:32 AM

I quit everything


I have never been the social media type of person. But that doesn’t mean I don’t want to socialize and get or stay in contact with other people. So although I am not a power user, I always enjoyed building and using my online social network. I used to be online on ICQ for basically all my computer time, and I once had a rich Skype contact list.

However, ICQ just died because people went away to other services. I remember how excited I was when WhatsApp became available. To me it was the perfect messenger; there was no easier way to get in contact and chat with your friends and family (or just people you somehow had in your address book), for free. All of the services I have ever used followed one of two possible scenarios:

  • Either they died because people left for the bigger platform
  • or the bigger platform was bought and/or changed their terms of use to make any further use completely unjustifiable (at least for me)

Quitstory

  • 2011 I quit StudiVZ, a social network that I joined in 2006, when it was still exclusive to students. However, almost my whole bubble left for Facebook, so to stay in contact I followed. RIP StudiVZ, we had a great time.
  • Also 2011 I quit Skype, when it was acquired by Microsoft. I was not too smart back then, but I already knew I wanted to avoid Microsoft. It wasn’t hard anyway, most friends had left already.
  • 2017 I quit Facebook. That did cost me about half of my connections to old school friends (or acquaintances) and remote relatives. But the terms of use (giving up all rights on any content to Facebook) and their practices (crawling all my connections to use their personal information against them) made it impossible for me to stay.
  • 2018 I quit WhatsApp. It was a hard decision because, as mentioned before, I was once so happy about this app’s existence, and I was using it as the main communication channel with almost all friends and family. But in 2014 WhatsApp was bought by Facebook. In 2016 it was revealed that Facebook was combining the data from the messenger and the Facebook platform for targeted advertising, and it announced changes to the terms of use. For me it was not possible to continue using the app.
  • Also 2018 I quit Twitter. Much too late. It had been the platform that allowed the rise of an old orange fascist, gave him the stage he needed, and did far too little against false information spreading like crazy. I didn’t need to wait for any whistleblowers to know that the recommendation algorithm favored hate speech and misinformation, or that this platform was not good for my mental health. I’m glad, though, that I was gone before the takeover.
  • Also 2018 I quit my Google account. I was using it mainly to run my Android phone. However, quitting Google never hurt me - syncing my contacts and calendars via CardDAV and CalDAV has always been painless. Google Circles (which I peeked into for a week or so) never became a thing anyway. I was using custom ROMs (mainly CyanogenMod, later LineageOS) for all my phones anyway.
  • 2020 I quit Amazon. Shopping is actually more fun again. I still do online shopping occasionally, most often trying to buy from the manufacturers directly, but if I can I try to do offline shopping in our beautiful city.
  • 2021 I quit my smartphone. I just stopped using my phone for almost anything except making and receiving calls. I tried a whole bunch of things to gain control over the device but found that it was impossible for me. The device had in fact more control over me than vice versa; I had to quit.
  • 2024 I quit Paypal. It’s a shame that our banks cannot come up with a convenient solution, and it’s also a shame I helped to make that disgusting person who happens to own Paypal even richer.
  • Also in 2024 I quit Github. It’s the biggest code repository in the world. I’m sure it’s the biggest hoster of FOSS projects, too. Why? Why sell that to a company like Microsoft? I don’t want to have a Microsoft account. I had to quit.

Stopped using the smartphone

Implications

Call them what you may: big four, big five, GAFAM/FAAMG, etc. I quit them all. They have a huge impact on our lives, and I think it’s not for the better. They have all shown often enough that they cannot be trusted; they gather and link all the information about us they can lay hands on and use it against us, selling us out to the highest bidder (and the second and third highest, because copying digital data is cheap). I’m not regretting my decisions, but they were not without implications. And in fact I am quite pissed, because I don’t think it is my fault that I had to quit. It is something that those big tech companies took from me.

  • I lost contact with a bunch of people. Maybe this is a FOMO kind of thing; it’s not that I was in contact with these distant relatives or acquaintances, but I had a low threshold for reaching out. Not so much anymore.
  • People react angrily when they find they cannot reach me. I am available via certain channels, but a lot of people don’t understand my reasoning for not joining the big networks. As if I were trying to make their lives more complicated than necessary.
  • I can’t do OAuth. If online platforms don’t implement their own login and authentication but instead rely on identification via the big IdPs, I’m out. Means I will probably not be able to participate in Advent of Code this year. It’s kind of sad.
  • I’m the last to know. Not being in that WhatsApp group, and not reading the Signal message about the meeting cancellation 5 minutes before the scheduled start (because I don’t have Signal on my phone), does have that effect. There used to be a certain commitment once you agreed to something or scheduled a meeting. But these days, everything can be changed and cancelled just minutes before an appointment with a single text message. I feel old(-fashioned) for trusting in others’ commitment, but I don’t want to give it up yet.

Of course there is still potential to quit even more: I don’t have a Youtube account (of course) but I still watch videos there. I do have a Netflix subscription, and cancelling that would put me into serious trouble with my family. I’m also occasionally looking up locations on Google maps, but only if I want to look at the satellite pictures.

However, the web is becoming more and more bloated with ads and trackers, and old pages that were fun to browse in the earlier days of the web have vanished; it’s not much fun to use anymore. Maybe HTTP/S will be the next thing for me to quit.

Conclusions

I’m still using the internet to read my news, to connect with friends and family and to sync and backup all the stuff that’s important to me. There are plenty of alternatives to big tech that I have found work really well for me. The recipe is almost always the same: If it’s open and distributed, it’s less likely to fall into the hands of tech oligarchs.

I’m using IRC, Matrix and Signal for messaging, daily. Of those, Signal may have the highest risk of disappointing me one day, but I do have faith. Hosting my own Nextcloud and Email servers has to date been a smooth and nice experience. Receiving my news via RSS and atom feeds gives me control over the sources I want to expose myself to, without being flooded with ads.

I have tried Mastodon and other Fediverse networks, but I was not able to move any of my friends there to make it actual fun. As mentioned, I’ve never been too much into social media, but I like(d) to see some vital signs of different people in my life from time to time. I will not do bluesky, as I cannot see how it differs from those big centralized platforms that have failed me.

It’s not a bad online life, and after some configuration it’s no harder to maintain than any social media account. I only wish it hadn’t been necessary for me to walk this path. The web could have developed very differently and been an open and welcoming space for everyone today. Maybe we’ll get there someday.

February 19, 2025 12:00 AM

Simple blogging engine


As mentioned in the previous post, I have been using several frameworks for blogging. But the threshold to overcome to start writing new articles was always too high. Additionally, I’m getting more and more annoyed by the internet, or specifically browsing the www via HTTP/S. It’s beginning to feel like hard work not to get tracked everywhere and not to support big tech and their fascist CEOs by using their services. That’s why I have found the gemini protocol interesting ever since I got to know about it. I wrote about it before:

Gemini blog post

That’s why I decided to not go for HTTPS-first in my blog, but do gemini first. Although you’re probably reading this as the generated HTML or in your feed reader.

Low-threshold writing

To just get started, I’m now using my tmux session that is running 24/7 on my home server. It’s the session I open by default on all my devices, because it contains my messaging (IRC, Signal, Matrix) and news (RSS feeds). Now it also contains a neovim session that lets me push all my thoughts into text files easily, from everywhere.

Agate

The format I write in is gemtext, a markup language that is even simpler than Markdown. Gemtext allows three levels of headings, links, lists, blockquotes and preformatted text, and that’s it. And to make my life even easier, I only need to touch a file .directory-listing-ok to let agate create an autoindex of each directory, so I don’t have to care too much about housekeeping and linking my articles. I just went with this scheme to make sure my posts appear in the correct order:

blog
└── 2025
    ├── 1
    │   └── index.gmi
    └── 2
        └── index.gmi

When pointed to a directory, agate will automatically serve the index.gmi if it finds one.

To serve the files in my gemlog, I just copy them as-is, using rsync. If anyone only browsed gemini space, I would be done at this point. I’m using agate, a gemini server written in Rust, to serve the static blog. Technically, gemini would allow more than that, using CGI to process requests and dynamically return responses, but simple is just fine.
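The copy step itself is a one-liner; the host and paths below are placeholders:

rsync -av --delete blog/ gemini-host:/srv/gemini/content/blog/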

The not-so-low publishing threshold

However, if I ever want any person to actually read this, sadly I will have to offer more than gemtext. Translating everything into HTML and compiling an atom.xml comes with some more challenges. Now I need some metadata, like title and date. For now I’m just going to add that as preformatted text at the beginning of each file I want to publish. The advantage is that I can filter out files I want to keep private this way. Using ripgrep, I just find all files with the published directive and pipe them through my publishing script.
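That selection can be as small as this sketch (publish.sh stands in for my publishing script):

rg --files-with-matches '^published:' blog/ | while read -r post; do
    ./publish.sh "$post"
done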

To generate the HTML, I’m going the route gemtext -> markdown -> html, for lack of better ideas. Gemtext to Markdown is trivial; I only need to reformat the links (using sed in my case). To generate the HTML I use pandoc, although it’s way too powerful and not lightweight for this task. But I just like pandoc. I’m adding simple.css so I don’t have to fiddle around with any design questions.
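A sketch of that route for a single post; the sed expression only rewrites gemtext "=> url label" link lines and lets everything else pass through:

sed -E 's|^=>[[:space:]]+([^[:space:]]+)[[:space:]]*(.*)|[\2](\1)|' post.gmi > post.md
pandoc --standalone --css simple.css --output post.html post.md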

Simplecss

I was looking for an atom feed generator, until I noticed how easily this file can be generated manually. Again, a little bit of ripgrep and bash leaves me with an atom.xml that I’m actually quite happy with.
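The feed skeleton really is just a few lines of bash; the title and URLs below are placeholders, and the per-post <entry> elements (collected with ripgrep) would go before the closing tag:

cat > atom.xml <<EOF
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>my gemlog</title>
  <link href="https://example.org/blog/atom.xml" rel="self"/>
  <id>https://example.org/blog/</id>
  <updated>$(date -u +%Y-%m-%dT%H:%M:%SZ)</updated>
</feed>
EOF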

The yak can be shaved until the end of the world

I hope I have put everything out of the way to get started easily and quickly. I could configure the system until the end of time to make unimportant things look better, but I don’t want to fall into that trap (again). I’m going to publish my scripts to a public repository soon, in case anyone feels inspired to go a similar route.

February 18, 2025 12:00 AM

Blog Questions Challenge 2025


I’m picking up the challenge from Jason Braganza. If you haven’t, go visit his blog and subscribe to the newsletter ;)

Jason’s Blog

1. Why did you make the blog in the first place?

That’s been the first question I asked myself when starting this blog. It was part of the DGPLUG #summertraining and I kind of started without actually knowing what to do with it. But I did want to have my own little corner in cyberspace.

Why another blog?

2. What platform are you using to manage your blog and why did you choose it?

I have a home server running vim in a tmux session. The articles are written as gemtext, as I have decided that my gemlog should be the source of truth for my blog. I’ve written some little bash scripts to convert everything to HTML and an atom feed as well, but I’m actually not very motivated anymore to care about website design. Gemtext is the simplest markup language I know, and keeping it simple makes the most sense to me.

Gemtext

3. Have you blogged on other platforms before?

I started writing on wordpress.com; without running my own server, it was the most accessible platform to me. When moving to my own infrastructure I used Lektor, a static website generator framework written in Python. It was quite nice and powerful, but in the end I wanted to get rid of the extra dependencies and simplify even more.

Lektor

4. How do you write your posts?

Rarely. If I write, I just write, basically the same way I would talk. There were a very few posts where I did some research because I wanted to make them a useful and comprehensive source for future look-ups, but in most cases I’m simply too lazy. I don’t spend much time on structure or thinking about how to guide the reader through my thoughts; it’s just for me and anyone who cares.

5. When do you feel most inspired to write?

Always in situations when I don’t have the time to write, never when I do have the time. Maybe there’s something wrong with me.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

Yes, mostly. I do have a couple of posts that I didn’t publish immediately, so they are still not published. I find it hard to re-iterate my own writing, so I try to avoid it by publishing immediately :)

7. Your favorite post on your blog?

The post I was looking up myself most often is the PostgreSQL migration thing. It was a good idea to write that down ;)

Postgres migration between multiple instances

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

I just did a major refactoring of the system, basically doing everything manually now. It forces me to keep things simple, because I think it should be simple to write and publish a text online. I also hope to have lowered the threshold for me to start writing new posts. So piloting the current system, it is.

February 18, 2025 12:00 AM

pass using stateless OpenPGP command line interface

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole spec here.

Installation

cargo install rsop rsop-oct

And I copied the bash script from my repository to somewhere in my PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
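Since every SOP implementation shares the same verbs, encrypting to that stored cert looks roughly like this (the message file names are illustrative):

# encrypt to the cert pass already uses; decryption goes through the card-backed rsoct
rsop encrypt ~/.password-store/.gpg-key < secret.txt > secret.asc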

Usage

Nothing changed in my daily pass usage, except the number of times I type my PIN :)

February 12, 2025 05:26 AM

Using openpgp-card-tool-git with git

Part of the power of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time handles git signing & verification using OpenPGP, with my Yubikey doing the actual signing operation, via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.

Installation & configuration

cargo install openpgp-card-tool-git

Then you will have to update your git configuration (in my case the global configuration).

git config --global gpg.program <path to oct-git>

I am assuming that you already had git configured for signing before; otherwise you have to run the following two commands too.

git config --global commit.gpgsign true
git config --global tag.gpgsign true

Usage

Before you start using it, you want to save the PIN in your system keyring.

Use the following command.

oct-git --store-card-pin

That is it. Now git will sign your commits using the oct-git tool.
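A quick way to confirm the wiring, using plain git commands:

git commit --allow-empty -m "signing test"
git log --show-signature -1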

In the next blog post I will show how to use the other tools from the author for various other OpenPGP operations.

February 11, 2025 11:12 AM

KubeCon + CloudNativeCon India 2024

Banner with KubeCon and Cloud Native Con India logos

Conference attendance had taken a hit since the onset of the COVID-19 pandemic. I attended many virtual conferences, though, and was glad to present at a few, like FOSDEM - a conference I had always longed to present at.

Sadly, the virtual conferences did not have the feel of in-person conferences. With 2024 here and being fully vaccinated, I started attending a few in-person conferences again. The year started with FOSSASIA in Hanoi, Vietnam, followed by a few more over the next few months.

December 2024 was going to be special as we were all waiting for the first edition of KubeCon + CloudNativeCon in India. I had planned to attend the EU/NA editions of the conference, but visa issues made those more difficult to attend. As fate would have it, India was the one planned for me.

KubeCon + CloudNativeCon India 2024 took place in the capital city, Delhi, India from 11th - 12th December 2024, along with co-events hosted at the same venue, Yashobhoomi Convention Centre on 10th December 2024.

Venue

Let’s start with the venue. As an organizer of other conferences, the thing that blew my mind was the venue, YASHOBHOOMI (India International Convention and Expo Centre). The venue is huge enough to accommodate large-scale conferences, and I got to know that the convention centre is still a work in progress, with more halls to come. If I heard correctly, another conference was running in parallel at the venue around the same time.

Now, let’s jump to the conference.

Maintainer Summit

The first day of the conference, 10th December 2024, was the CNCF Maintainers Summit. The event is exclusive for people behind CNCF projects, providing space to showcase their projects and meet other maintainers face-to-face.

Due to the chilly and foggy morning, the event started a bit late to accommodate more participants for the very first talk. The event had a total of six talks, including the welcome note. Our project, Flatcar Container Linux, also had a talk accepted: “A Maintainer’s Odyssey: Time, Technology and Transformation”.

This talk took attendees through the journey of Flatcar Container Linux from a maintainer’s perspective. It shared Flatcar’s inspiration - the journey from a “friendly fork” of CoreOS Container Linux to becoming a robust, independent, container-optimized Linux OS. The beginning of the journey shared the daunting red CI dashboard, almost-zero platform support, an unstructured release pipeline, a mammoth list of outdated packages, missing support for ARM architecture, and more – hardly a foundation for future initiatives. The talk described how, over the years, countless human hours were dedicated to transforming Flatcar, the initiatives we undertook, and the lessons we learned as a team. A good conversation followed during the Q&A with questions about release pipelines, architectures, and continued in the hallway track.

During the second half, I hosted an unconference titled “Special Purpose Operating System WG (SPOS WG) / Immutable OSes”. The aim was to discuss the WG with other maintainers and enlighten the audience about it. During the session, we had a general introduction to the SPOS WG and immutable OSes. It was great to see maintainers and users from Flatcar, Fedora CoreOS, PhotonOS, and Bluefin joining the unconference. Since most attendees were new to Immutable OSes, many questions focused on how these OSes plug into the existing ecosystem and the differences between available options. A productive discussion followed about the update mechanism and how people leverage the minimal management required for these OSes.

I later joined the Kubeflow unconference. Kubeflow is a Kubernetes-native platform that orchestrates machine learning workflows through custom controllers. It excels at managing ML systems with a focus on creating independent microservices, running on any infrastructure, and scaling workloads efficiently. The discussion covered how ML training jobs utilize batch processing capabilities with features like job queuing and fault tolerance, while inference workloads operate in a serverless manner, scaling pods dynamically based on demand. Kubeflow abstracts away the complexity of different ML frameworks (TensorFlow, PyTorch) and hardware configurations (GPUs, TPUs), providing intuitive interfaces for both data scientists and infrastructure operators.

Conference Days

During the conference days, I spent much of my time at the booth and doing final prep for my talk and tutorial.

On the maintainers summit day, I had gone to check the room assigned for the conference days, but discovered that the room didn’t exist in the venue. So, on the conference days, I started by informing the organizers about the schedule issue. Then, I proceeded to the keynote auditorium, where Chris Aniszczyk, CTO, Linux Foundation (CNCF), kicked off the conference by sharing updates about the Cloud Native space and ongoing initiatives. This was followed by Flipkart’s keynote talk and a wonderful, insightful panel discussion. Nikhita’s keynote, “The Cloud Native So Far”, is a must-watch, where she talked about CNCF’s journey until now.

After the keynote, I went to the speaker’s room, prepared briefly, and then proceeded to the community booth area to set up the Flatcar Container Linux booth. The booth received many visitors. Being alone there, I asked Anirudha Basak, a Flatcar contributor, to help for a while. People asked all sorts of questions, from Flatcar’s relevance in the CNCF space to how it works as a container host and how they could adapt Flatcar in their infrastructure.

Around 5 PM, I wrapped up the booth and went to my talk room to present “Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext”. The talk covered an introduction to systemd-sysext, Flatcar & Cluster API. It then discussed how the current setup using Image Builder poses many infrastructure challenges, and how we’ve been utilizing systemd to resolve these challenges and simplify using ClusterAPI with multiple providers. The post-talk conversation was engaging, as we discussed sysext, which was new to many attendees, leading to productive hallway track discussions.

Day 2 began with me back in the keynote hall. First up were Aparna & Sumedh talking about Shopify using GenAI + Kubernetes for workloads, followed by Lachie sharing the Kubernetes story with Mandala and Indian contributors as the focal point. As an enthusiast photographer, I particularly enjoyed the talk presented through Lachie’s own photographs.

Soon after, I proceeded to my tutorial room. Though I had planned to follow the Flatcar tutorial we have, the AV setup broke down after the introductory start, and the session turned into a Q&A. It was difficult to regain momentum. The middle section was filled mostly with questions, many about Flatcar’s security perspective and its integration. After the tutorial wrapped up, lunch time was mostly taken up by hallway track discussions with tutorial attendees. We had the afternoon slot on the second day for the Flatcar booth, though attendance decreased as people began leaving for the conference’s end. The range of interactions remained similar, with some attendees from talks and workshops visiting the booth for longer discussions. I managed to squeeze in some time to visit the Microsoft booth at the end of the conference.

Overall, I had an excellent experience, and kudos to the organizers for putting on a splendid show.

Takeaways

Being at a booth representing Flatcar for the first time was a unique experience, with a mix of people - some hearing about Flatcar for the first time and confusing it with container images, requiring explanation, and others familiar with container hosts & Flatcar bringing their own use cases. Questions ranged from update stability to implementing custom modifications required by internal policies, SLSA, and more. While I’ve managed booths before, this was notably different. Better preparation regarding booth displays, goodies, and Flatcar resources would have been helpful.

The talk went well, but presenting a tutorial was a different experience. I had expected hands-on participation, having recently conducted a successful similar session at rootconf. However, since most KubeCon attendees didn’t bring computers, I plan to modify my approach for future KubeCon tutorials.

At the booth, I also received questions about WASM + Flatcar, as Flatcar was categorized under WASM in the display.


Credit for the photos goes to CNCF, via the KubeCon + CloudNativeCon India 2024 Flickr album, and to @vipulgupta.travel

February 05, 2025 12:00 AM

About

I’m Nabarun Pal, also known as palnabarun or theonlynabarun, a distributed systems engineer and open source contributor with a passion for building resilient infrastructure and fostering collaborative communities. Currently, I work on Kubernetes and cloud-native technologies, contributing to the ecosystem that powers modern distributed applications.

When I’m not deep in code or community discussions, you can find me planning my next adventure, brewing different coffee concoctions, tweaking my homelab setup, or exploring new mechanical keyboards. I believe in the power of open source to democratize technology and create opportunities for everyone to contribute and learn.

A detailed view of my speaking engagements is on the /speaking page.

January 06, 2025 12:00 AM

Keynote at PyLadiesCon!

Since the very inception of my journey in Python and PyLadies, I have always thought of having a PyLadies Conference, a celebration of PyLadies. There were conversations here and there, but nothing was fruitful then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how could I not?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?

On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice. I saw an email from PyLadiesCon, titled "Invitation to be a Keynote Speaker at PyLadiesCon". The panic call went to Kushal Das: "Check if there is any attack on the Python server? I got a spammy email about PyLadiesCon, and the address is correct." "No, nothing," replied Kushal after checking. Wait, then WHAT??? PyLadiesCon wants me to give the keynote. THE KEYNOTE at PyLadiesCon.

Thank you Audrey for conceptualizing and creating PyLadies, our home.

keynote_pyladiescon.png

And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon on how PyLadies gave me purpose. See you all there.

Dreams do come true.

by Anwesha Das at November 29, 2024 05:35 PM
