Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

2025

What do all the stars and daggers after the book titles mean?


Note to self, for this year: Read less, write more notes. Abandon more books.

January

  1. Murder at the Vicarage, Agatha Christie*
  2. The Body in the Library, Agatha Christie*
  3. The Moving Finger, Agatha Christie*
  4. Sleeping Murder, Agatha Christie*
  5. A Murder Is Announced, Agatha Christie*
  6. They Do It with Mirrors, Agatha Christie*
  7. My Horrible Career, John Arundel*
  8. The Veiled Lodger, Sherlock & Co. Podcast*
  9. Hardcore History, Mania for Subjugation II, Episode 72*
  10. A Pocket Full of Rye, Agatha Christie*
  11. 4.50 from Paddington, Agatha Christie*
  12. The Mirror Crack’d From Side to Side, Agatha Christie*
  13. As You Wish: Inconceivable Tales from the Making of The Princess Bride, Cary Elwes & Joe Layden*
  14. A Caribbean Mystery, Agatha Christie*
  15. At Bertram’s Hotel, Agatha Christie*
  16. Nemesis, Agatha Christie*
  17. Miss Marple’s Final Cases, Agatha Christie*

February

  1. A Shadow in Summer, Daniel Abraham*
  2. Black Peter, Sherlock & Co. Podcast, Season 25*
  3. On Writing with Brandon Sanderson, Episodes 1-4, Brandon Sanderson*
  4. A Betrayal in Winter, Daniel Abraham*
  5. I Will Judge You by Your Bookshelf, Grant Snider*
  6. The Art of Living, Grant Snider*
  7. The Shape of Ideas, Grant Snider*
  8. For the Love of Go, John Arundel*
  9. Powerful Command-Line Applications in Go, Ricardo Gerardi*
  10. Learning Go, Jon Bodner*
  11. An Autumn War, Daniel Abraham*
  12. The Price of Spring, Daniel Abraham*
  13. Math for English Majors, Ben Orlin (Notes)*
  14. Empire Podcast, The Three Kings, Episodes 212–214*#

March

  1. Companion to the Count, Melissa Kendall*
  2. Wisteria Lodge, Sherlock & Co. Podcast, Season 26*
  3. A Story of Love, Minerva Spencer*
  4. The Etiquette of Love, Minerva Spencer*
  5. A Very Bellamy Christmas, Minerva Spencer*

February 28, 2025 06:30 PM

Math for English Majors, Ben Orlin


cover of the book. purple/lavender background, with cartoon stick figure Shakespeare in the front proclaiming the book is a human take on the universal language

Ben Orlin, the modern maths apologist, brings us what I’d call his Magnum Opus, the modern mathematician’s lament.
Every time I read an Orlin book, I wish I’d had teachers like him when I was young, so that I would not have had such crippling math-phobia for so much of my life.
It should really be called Maths for the Rest of Us.
It’s great! So great!


My highlights from the book

A mathematician is hardly a reasonable person. More like a feral philosopher or a logician gone rogue.

It’s a bit peculiar to say that I add 4 and 3 to create 7; 4 and 3 simply are 7, irrespective of my efforts. When I multiply or divide or take a logarithm, the result equals whatever it equals, regardless of my labors. I do not, in a strict sense, change the numbers, but merely discover or reveal them. Operations act not upon the quantities themselves, but only upon our understandings of them.

In my childhood, back when money was made of metal and paper instead of software and lies, I learned that two quarters add up to 50 cents: 25 + 25 = 50. I loved knowing this.

… the beauty of mathematical language lies in its unreality. Math lets us escape this world of crumbs and mud for a realm of rigorous abstractions. The purer the logic—that is, the further from physical reality—the deeper the truth. “As far as laws of mathematics refer to reality, they are not certain,” said Einstein, “and as far as they are certain, they do not refer to reality.”

I sometimes envision mathematics as a tower. It leads from the earthy crust of everyday experience (piles of cookies; buckets of water; half-dollar coins) up to the thin atmosphere of abstract concepts (Lie groups, whatever those are). There’s pleasure and power in climbing to the upper floors. But there’s an equal pleasure—and a different kind of power—in descending to the bottom. Down there, you can touch the foundations, poke at the joints where math attaches to the world, and fill your half-gallon bucket with a new kind of insight.

Consider 2 + 3. If + is a verb, then who is the subject carrying out the addition? Neither 2 nor 3 performs any action; those nouns just sit there being nouns. You’re the one who adds, but you’re not a part of mathematical speech.

Such confusion is not limited to the youth. Adult textbook authors have been known to include gratuitous fighter jets and non sequitur cheetahs, as if the secret to math is picturing something, anything. But the hard part of math isn’t remembering what cats look like. It’s making sense of abstract ideas.

The challenge of algebra is to visualize the invisible.

“The point of doing algebra,” writes math teacher Paul Lockhart, “is… to move back and forth between several equivalent representations, depending on the situation at hand and depending on our taste. In this sense, all algebraic manipulation is psychological.”

Old information, new formation

I like this rule because it’s rarely even taught. Mathematicians apply it automatically, the way fluent English speakers always say “a big ugly bath toy” and not “a bath ugly big toy.” In fact, as Mark Forsyth points out in The Elements of Eloquence, English adjectives are typically placed in a certain order: opinion, size, age, shape, color, origin, material, purpose. Hence, “a lovely little old rectangular green French silver whittling knife.”

Likewise, in math, numbers are typically multiplied in a certain order: numeral, radical, constant, variable. Hence, we write 3πxy²z, and never (unless we’re trying to provoke someone) zy²πx3.

Toward the end of a sixth-grade lesson, a chipper young fellow named Kieran raised his hand. “I don’t really understand anything you’re saying,” he informed me. “But I can still get the right answer.” He beamed a patient smile.

I stifled a sigh. “Which part can I help you with?”

“Oh, I don’t need help,” he said. “It’s just that you were talking about this extra stuff. Like, the ideas behind it. I don’t, you know, do that.”

I blinked. He blinked. A great silence passed between us.

“Is that okay?” he concluded. “I mean, as long as I can get the right answer?”

There it was, out in the open: the subtext of almost every lesson I had taught that year. Day after day, I tried to illuminate the logic behind the symbols. Day after day, my students politely ignored my prattle to focus on the symbols themselves. What made that afternoon stand out was that Kieran broke the fourth wall. He uttered the title of the film we were acting in.

To do math, must you think about the ideas, or can you just focus on the symbols?

As David Hilbert quipped, “Mathematics is a game played according to certain simple rules with meaningless marks on paper.” That’s symbol pushing in a nutshell. Language divorced from meaning.

But a few weeks after Kieran’s query, I learned that not all mathematicians share my dim view of symbol pushing. I mentioned to my dad (a mathematician himself) that I had started writing an essay titled “How to Avoid Thinking in Math Class.” Before I could say any more, he gave the project his stamp of approval. “Great,” he said. “I’ve always said that the point of math education is to help you not to think.”

I was taken aback. No, I explained, the title was ironic. On the question of “Should we think?” I was firmly in favor.

“Oh, yes, thinking is good,” he generously conceded. “But it’s too hard to do all the time.”

He (and, indeed, Kieran) had a point. For example, it is an algebraic truth that (x + 1)(x − 1) is the same as x² − 1. This fact boils down to a repeated application of the distributive property; as such, you can explain it entirely in terms of pile rearrangement. But trying to do so is like ascending a sheer cliff face.

Quite a climb! Invigorating as an occasional workout, but unthinkable as a morning commute. This was exactly my dad’s point: Thinking is good. But it’s too hard to do all the time.

“Operations of thought,” wrote the mathematician Alfred North Whitehead, “are like cavalry charges in battle—they are strictly limited in number, they require fresh horses, and must only be made in decisive moments.”

In this case, there is no need to send in the cavalry. The symbol pusher, thinking only about letters and parentheses, reaches the same summit in a few effortless strides.

Can you imagine if English worked this way? An object’s name would indicate its physical size, so that a chihuahua (nine letters) would be three times the size of a cow (three letters). A food’s name would encode its recipe, so that a pizza would be a “DoughSauceCheeseBake.” Chemistry would be a tediously safe area of study, because we could run experiments simply by smushing together the names of various chemicals and seeing which ones spell “explosion.”

Symbol pushing boils the laws of logic down to laws of grammar. The language becomes a scale model of reality. We can wrangle ideas simply by wrangling ink.

So who was right, me or Kieran? The answer, of course, is both. To speak mathematics is to slip back and forth between two worlds, to inhabit two distinct frames of mind: the hard joy of thinking and the mindless trance of symbol pushing. Without the ink, the ideas are befuddling; but without the ideas, the ink means nothing. Learn the logic, learn it well, and then turn off your brain and let the symbols on the page dance to the silent music of the mind.


… notations often gain popularity precisely because they lend themselves to simple mechanical rules. You could say we choose the symbols for the express purpose of pushing them around. We can then generate right answers with no insight, no inspiration, no input other than elbow grease. Just turn the crank, and new knowledge pops out.

When someone is too focused on details, we say they’re missing the forest for the trees. Mathematicians, if anything, are guilty of the opposite: missing the trees for the forest. Their habit is to inspect not objects themselves, but the objects’ properties. (Not the trees, but the number of them.) Then, having distilled these properties, they look at the properties’ properties. (Not the number, but its evenness or oddness.) And so on: the properties of the properties of the properties of the things. “Matter does not engage their attention,” Henri Poincaré once said of mathematicians. “They are interested in form alone.”



February 25, 2025 03:51 AM

Access data persisted in Etcd with etcdctl and kubectl

I created the following CRD (Custom Resource Definition) with — kubectl apply -f crd-with-x-validations.yaml:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be in the form: <plural>.<group>
  name: myapps.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: example.com
  scope: Namespaced
  names:
    # kind is normally the CamelCased singular type. 
    kind: MyApp
    # singular name to be used as an alias on the CLI
    singular: myapp
    # plural name in the URL: /apis/<group>/<version>/<plural>
    plural: myapps
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            x-kubernetes-validations: 
              - rule: "self.minReplicas <= self.maxReplicas"
                messageExpression: "'minReplicas (%d) cannot be larger than maxReplicas (%d)'.format([self.minReplicas, self.maxReplicas])"
            type: object
            properties:
              minReplicas:
                type: integer
              maxReplicas:
                type: integer
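
As an aside, the CEL rule above can be exercised before poking at Etcd: a MyApp object with minReplicas greater than maxReplicas should be rejected by the API server with the messageExpression text. A minimal sketch (the resource name is invented):

kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: myapp-demo
spec:
  minReplicas: 5
  maxReplicas: 2
EOF
# expected: the apply fails with a message along the lines of
# "minReplicas (5) cannot be larger than maxReplicas (2)"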

I want to check how the above CRD is persisted in Etcd.

I have two ways to do the job:

Option 1:

Use etcdctl to directly verify the persisted data in Etcd.1

My three-step process:

  • Exec inside the etcd pod in the kube-system namespace of your Kubernetes cluster — kubectl exec -it -n kube-system etcd-kep-4595-cluster-control-plane -- /bin/sh
  • Create alias — alias e="etcdctl --endpoints 127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt"
  • Access the data — e get --prefix /registry/apiextensions.k8s.io/
sh-5.2# e get --prefix /registry/apiextensions.k8s.io/

/registry/apiextensions.k8s.io/customresourcedefinitions/shirts.stable.example.com
{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1beta1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","version":"v1","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"versions":[{"name":"v1","served":true,"storage":true}],"additionalPrinterColumns":[{"name":"Fruit","type":"string","JSONPath":".spec.color"}],"conversion":{"strategy":"None"},"preserveUnknownFields":false},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}

Option 2:

Use kubectl to access the persisted data from Etcd:

kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

> kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","resourceVersion":"594","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}],"conversion":{"strategy":"None"}},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}


  1. I realised that while accessing the same CRD data with etcdctl and kubectl, I get a few differences in the output. With etcdctl I get (i) "version":"v1", (ii) the CRD schema stored in the field "validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}}, and (iii) a top-level additionalPrinterColumns. With kubectl I don’t get those bits; instead, both the schema and the additionalPrinterColumns are stored inside the versions array - "versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}]. This is (maybe) because currently (as of writing) Kubernetes stores/persists v1 CRDs as v1beta1 in Etcd: v1 takes more space to represent the same CRD (due to denormalization of fields among multi-version CRDs), and there are CRDs in the wild that are already bumping against the max allowed size (Thank you, Jordan Liggitt, for explaining this.) Read this2 and this3 for some context. 

  2. The code block where the encoding version for CRDs is configured 

  3. An attempt to bump the storage version from v1beta1 → v1 was blocked on k/k PR #82292 

February 25, 2025 12:00 AM

slabtop - to check kernel memory usage, and kubelet's container_memory_kernel_usage metrics

Today, I learnt about slabtop1, a command-line utility to check the memory used by the kernel
(or, as its man page says, it displays kernel slab cache information in real time2).
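
A small usage sketch (flags per its man page), in case you want a one-shot, sorted snapshot instead of the live view:

sudo slabtop -o -s c | head -n 15   # -o: print once and exit; -s c: sort by cache size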

screenshot of slabtop output


(Logging for the future me)

So, what led to me learning about slabtop? 🙂

Jason Braganza taught me this, while we were preparing for our upcoming conference talk (on Kubernetes Metrics)!

Precisely, the following is the metric (exposed by the kubelet component within a Kubernetes cluster) that led to the discussion.

# HELP container_memory_kernel_usage Size of kernel memory allocated in bytes.
# TYPE container_memory_kernel_usage gauge
container_memory_kernel_usage{container="",id="/",image="",name="",namespace="",pod=""} 0 1732452865827
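
To see this metric coming from a live kubelet without setting up Prometheus, one option is to scrape the kubelet’s cadvisor endpoint through the API server’s node proxy (the node name below is a placeholder):

kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/cadvisor | grep container_memory_kernel_usage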

And how does the kubelet get this “kernel memory allocation” information and feed it into the container_memory_kernel_usage metric?

The answer (at least to the best of my understanding) is –

The kubelet’s server package imports the “github.com/google/cadvisor/metrics”3 (aka cadvisor/metrics) module.

This cadvisor/metrics Go module provides a NewPrometheusCollector() function (which the kubelet uses here4).

The NewPrometheusCollector() function takes includedMetrics as one of its many parameters.

  r.RawMustRegister(metrics.NewPrometheusCollector(prometheusHostAdapter{s.host}, containerPrometheusLabelsFunc(s.host), includedMetrics, clock.RealClock{}, cadvisorOpts))

And when this includedMetrics contains cadvisormetrics.MemoryUsageMetrics (which it does in the case in question, check here5),

	includedMetrics := cadvisormetrics.MetricSet{
		...
		cadvisormetrics.MemoryUsageMetrics:  struct{}{},
		...
	}

then the NewPrometheusCollector() function exposes the container_memory_kernel_usage6 metric.

func NewPrometheusCollector(i infoProvider, f ContainerLabelsFunc, includedMetrics container.MetricSet, now clock.Clock, opts v2.RequestOptions) *PrometheusCollector {
        ...
        ...
	if includedMetrics.Has(container.MemoryUsageMetrics) {
		c.containerMetrics = append(c.containerMetrics, []containerMetric{
		       ...
		       ...
		       {
				name:      "container_memory_kernel_usage",
				help:      "Size of kernel memory allocated in bytes.",
				valueType: prometheus.GaugeValue,
				getValues: func(s *info.ContainerStats) metricValues {
					return metricValues{{value: float64(s.Memory.KernelUsage), timestamp: s.Timestamp}}
				},
			},
        ...
        ...

And as we see above in the definition of the container_memory_kernel_usage metric,
the valueType is prometheus.GaugeValue (so it’s a gauge-type metric),
and the value is float64(s.Memory.KernelUsage), where KernelUsage is defined here7 and interpreted here8.

I still feel I can go further down a few more steps to find out the true source of this information, but that’s all for now.


  1. which in turn gets the information from /proc/slabinfo

  2. to me it looks something like top or htop

  3. here: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L39C2-L39C38 

  4. kubelet registering the cadvisor metrics provided by cadvisor’s metrics.NewPrometheusCollector(...) function – https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L463 

  5. Kubelet server package creating a (cadvisor metrics based) set of includedMetrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L441-L451 

  6. codeblock adding container_memory_kernel_usage metrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/vendor/github.com/google/cadvisor/metrics/prometheus.go#L371-L393 

  7. Cadvisor’s MemoryStats struct providing KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/info/v1/container.go#L430-L432 

  8. Cadvisor’s setMemoryStats() function, setting value for KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/container/libcontainer/handler.go#L799-L803 

February 24, 2025 12:00 AM

I quit everything


I have never been the social media type of person. But that doesn’t mean I don’t want to socialize and get or stay in contact with other people. So although I was never a power user, I always enjoyed building and using my online social network. I used to be online on ICQ for basically all my computer time, and I once had a rich Skype contact list.

However, ICQ just died because people went away to use other services. I remember how excited I was when WhatsApp became available. To me it was the perfect messenger; there was no easier way to get in contact and chat with your friends and family (or just people you somehow had in your address book), for free. All of the services I’ve ever used followed one of two possible scenarios:

  • Either they died because people left for the bigger platform,
  • or the bigger platform was bought and/or changed its terms of use to make any further use completely unjustifiable (at least for me).

Quitstory

  • 2011 I quit StudiVZ, a social network that I joined in 2006, when it was still exclusive to students. However, almost my whole bubble left for Facebook, so to stay in contact I followed. RIP StudiVZ, we had a great time.
  • Also in 2011 I quit Skype, when it was acquired by Microsoft. I was not too smart back then, but I already knew I wanted to avoid Microsoft. It wasn’t hard anyway; most friends had left already.
  • 2017 I quit Facebook. That cost me about half of my connections to old school friends (or acquaintances) and remote relatives. But the terms of use (giving up all rights on any content to Facebook) and their practices (crawling all my connections to use their personal information against them) made it impossible for me to stay.
  • 2018 I quit WhatsApp. It was a hard decision because, as mentioned before, I was once so happy about this app’s existence, and I was using it as my main communication channel with almost all friends and family. But in 2014 WhatsApp was bought by Facebook. In 2016 it was revealed that Facebook was combining the data from the messenger and the Facebook platform for targeted advertising, and changes to the terms of use were announced. For me it was not possible to continue using the app.
  • Also in 2018 I quit Twitter. Much too late. It has been the platform that allowed the rise of an old orange fascist, gave him the stage he needed, and did far too little against false information spreading like crazy. I didn’t need to wait for any whistleblowers to know that the recommendation algorithm was favoring hate speech and misinformation, or that this platform was not good for my mental health, anyway. I’m glad, though, that I was gone before the takeover.
  • Also in 2018 I quit my Google account. I was using it to run my Android phone, mainly. However, quitting Google never hurt me; syncing my contacts and calendars via CardDAV and CalDAV has always been painless. Google circles (which I peeked into for a week or so) never became a thing anyway. I was using custom ROMs (mainly CyanogenMod, later LineageOS) for all my phones anyway.
  • 2020 I quit Amazon. Shopping is actually more fun again. I still do online shopping occasionally, most often trying to buy from the manufacturers directly, but if I can I try to do offline shopping in our beautiful city.
  • 2021 I quit the smartphone. I just stopped using my phone for almost anything except making and receiving calls. I tried a whole bunch of things to gain control over the device but found that it was impossible for me. The device had in fact more control over me than vice versa; I had to quit.
  • 2024 I quit PayPal. It’s a shame that our banks cannot come up with a convenient solution, and it’s also a shame I helped make that disgusting person who happens to own PayPal even richer.
  • Also in 2024 I quit GitHub. It’s the biggest code repository in the world. I’m sure it’s the biggest host of FOSS projects, too. Why? Why sell that to a company like Microsoft? I don’t want to have a Microsoft account. I had to quit.

Stopped using the smartphone

Implications

Call them what you may: big four, big five, GAFAM/FAAMG, etc. I quit them all. They have a huge impact on our lives, and I think it’s not for the better. They have all shown often enough that they cannot be trusted; they gather and link all the information about us they can lay their hands on and use it against us, selling us out to the highest bidder (and the second and third highest, because copying digital data is cheap). I’m not regretting my decisions, but they were not without implications. And in fact I am quite pissed, because I don’t think it is my fault that I had to quit. It is something that those big tech companies took from me.

  • I lost contact with a bunch of people. Maybe this is a FOMO kind of thing; it’s not that I was in close contact with these distant relatives or acquaintances, but I had a low threshold for reaching out. Not so much anymore.
  • People react angrily when they find they cannot reach me. I am available via certain channels, but a lot of people don’t understand my reasoning for not joining the big networks. As if I was trying to make their lives more complicated than necessary.
  • I can’t do OAuth. If online platforms don’t implement their own login and authentication but instead rely on identification via the big IdPs, I’m out. That means I will probably not be able to participate in Advent of Code this year. It’s kind of sad.
  • I’m the last to know. Not being in that WhatsApp group, and not reading the Signal message about the meeting cancellation 5 minutes before the scheduled start (because I don’t have Signal on my phone), does have that effect. There used to be a certain commitment once you agreed to something or scheduled a meeting. But these days, everything can be changed or cancelled just minutes before an appointment with a single text message. I feel old(-fashioned) for trusting in others’ commitment, but I don’t want to give it up yet.

Of course there is still potential to quit even more: I don’t have a YouTube account (of course), but I still watch videos there. I do have a Netflix subscription, and cancelling that would put me into serious trouble with my family. I’m also occasionally looking up locations on Google Maps, but only if I want to look at the satellite pictures.

However, the web is becoming more and more bloated with ads and trackers, and old pages that were fun to browse in the earlier days of the web have vanished; it’s not much fun to use anymore. Maybe HTTP/S will be the next thing for me to quit.

Conclusions

I’m still using the internet to read my news, to connect with friends and family and to sync and backup all the stuff that’s important to me. There are plenty of alternatives to big tech that I have found work really well for me. The recipe is almost always the same: If it’s open and distributed, it’s less likely to fall into the hands of tech oligarchs.

I’m using IRC, Matrix and Signal for messaging, daily. Of those, Signal may have the highest risk of disappointing me one day, but I do have faith. Hosting my own Nextcloud and email servers has to date been a smooth and nice experience. Receiving my news via RSS and Atom feeds gives me control over the sources I want to expose myself to, without being flooded with ads.

I have tried Mastodon and other Fediverse networks, but I was not able to move any of my friends there to make it actual fun. As mentioned, I’ve never been too much into social media, but I like(d) to see some vital signs of different people in my life from time to time. I will not do bluesky, as I cannot see how it differs from those big centralized platforms that have failed me.

It’s not a bad online life, and after some configuration it’s no harder to maintain than any social media account. I only wish it hadn’t been necessary for me to walk this path. The web could have developed much differently and be an open and welcoming space for everyone today. Maybe we’ll get there someday.

February 19, 2025 12:00 AM

Here’s What is Stopping Us From Using Free Software

A couple of thoughts on Robin’s post, because I’ve spent a lifetime using both FOSS and proprietary software, and more than two decades advising and supporting people and businesses who do the same.

  1. To paraphrase Robin’s own point: most people don’t care.1

If it runs Minetest, then it’s fine.

aka “If it does the thing I want it to do, without too much cognitive overhead, I’ll use it.”
Learning a skill just to get a task done is definitely not what the world at large wants to do. Nor, I imagine, do most folk have the time and the bandwidth. iTunes (and Netflix) when they launched2 were the prime examples of this. Convenience and ease of use trump Free. By the same token, VLC is probably one of the most installed and used video players in the world, not because it is FOSS, but because it is easy to install and use.

  2. Some people feel strongly about having control over what they do, over their data. Those folks were the easiest for me to move to Free Software, because it aligned with what they wanted.

  3. Support costs are about the same.
    If a small business chooses to use free software, the cost of supporting it if stuff breaks (especially if it’s business critical3) is probably about the same as paying for the proprietary version. The fact that paying to get it fixed benefits the world at large is lost on most. Or maybe that’s something we Free Software evangelists could learn to do better. Robin promises his team first-level support, but what happens when he moves on to better opportunities and is no longer around?

So I agree with most of what Robin says, with just one caveat.

People aren’t afraid of change. At least not the kind of change Free Software poses.
People just don’t have the time and the energy to deal with change, with everything else happening in their lives.
Free software is just not as important to them as it is to us. That is all.4





  1. And I’d venture to say, that’s perfectly ok. ↩︎

  2. before they went to pot ↩︎

  3. rpm updates borking my expensive garment printing machine, anyone? ↩︎

  4. And once again, that’s perfectly ok too. ↩︎

February 18, 2025 06:43 AM

Simple blogging engine


As mentioned in the previous post, I have been using several frameworks for blogging. But the threshold to overcome before starting to write new articles was always too high. Additionally, I’m getting more and more annoyed by the internet, or specifically by browsing the WWW via HTTP/S. It’s beginning to feel like hard work to not get tracked everywhere and to not support big tech and their fascist CEOs by using their services. That’s why I have found the Gemini protocol interesting ever since I learned about it. I wrote about it before:

Gemini blog post

That’s why I decided not to go HTTPS-first with my blog, but Gemini-first. Although you’re probably reading this as the generated HTML or in your feed reader.

Low-threshold writing

To just get started, I’m now using my tmux session that runs 24/7 on my home server. It’s the session I open by default on all my devices, because it contains my messaging (IRC, Signal, Matrix) and news (RSS feeds). Now it also contains a neovim session that lets me push all my thoughts into text files easily and from everywhere.

Agate

The format I write in is gemtext, a markup language that is even simpler than Markdown. Gemtext allows three levels of headings, links, lists, blockquotes and preformatted text, and that’s it. And to make my life even easier, I only need to touch a file called .directory-listing-ok to let agate create an autoindex of each directory, so I don’t have to worry about housekeeping and linking my articles too much. I just went with this scheme to make sure my posts appear in the correct order:

blog
└── 2025
    ├── 1
    │   └── index.gmi
    └── 2
        └── index.gmi

When pointed to a directory, agate will automatically serve the index.gmi if it finds one.

To serve the files in my gemlog, I just copy them as is, using rsync. If anyone actually browsed Gemini space, I would be done at this point. I’m using agate, a Gemini server written in Rust, to serve the static blog. Technically, Gemini would allow more than that, using CGI to process requests and dynamically return responses, but simple is just fine.
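
That deployment step is, in essence, a single command (paths invented for illustration):

rsync -av --delete ~/gemlog/blog/ server:/srv/gemini/content/blog/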

The not-so-low publishing threshold

However, if I ever want anyone to actually read this, I will sadly have to offer more than gemtext. Translating everything into HTML and compiling an atom.xml comes with some more challenges. Now I need some metadata, like title and date. For now I’m just going to add that as preformatted text at the beginning of each file I want to publish. The advantage is that I can filter out files I want to keep private this way. Using ripgrep, I find all files with the published directive and pipe them through my publishing script.

To generate the HTML, I’m going the route gemtext -> markdown -> html, for lack of better ideas. Gemtext to Markdown is trivial; I only need to reformat the links (using sed in my case). To generate the HTML I use pandoc, although it’s way too powerful and not at all lightweight for this task. But I just like pandoc. I’m adding simple.css so I don’t have to fiddle around with any design questions.
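
A minimal sketch of that route, assuming the only real work is rewriting gemtext link lines ("=> URL label") into Markdown links (file names invented; the actual script may differ):

# gemtext -> markdown: turn "=> URL label" into "[label](URL)"
sed -E 's|^=>[[:space:]]+([^[:space:]]+)[[:space:]]+(.+)|[\2](\1)|' post.gmi > post.md
# markdown -> html, with simple.css attached
pandoc --standalone --css simple.css post.md -o post.html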

Simplecss

I was looking for an atom feed generator, until I noticed how easily this file can be generated manually. Again, a little bit of ripgrep and bash leaves me with an atom.xml that I’m actually quite happy with.
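
For the curious, a rough sketch of what such a manual feed generator can look like (paths invented; a real feed also needs id, link and author elements, plus proper XML escaping):

{
  printf '<?xml version="1.0" encoding="utf-8"?>\n'
  printf '<feed xmlns="http://www.w3.org/2005/Atom"><title>gemlog</title>\n'
  for f in $(rg -l '^published:' blog); do
    title=$(rg -oN '^title: (.*)' -r '$1' "$f")
    date=$(rg -oN '^published: (.*)' -r '$1' "$f")
    printf '<entry><title>%s</title><updated>%sT00:00:00Z</updated></entry>\n' "$title" "$date"
  done
  printf '</feed>\n'
} > atom.xml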

The yak can be shaved until the end of the world

I hope I have put everything out of the way to get started easily and quickly. I could keep configuring the system until the end of time to make unimportant things look better, but I don’t want to fall into that trap (again). I’m going to publish my scripts to a public repository soon, in case anyone feels inspired to go a similar route.

February 18, 2025 12:00 AM

Blog Questions Challenge 2025


I’m picking up the challenge from Jason Braganza. If you haven’t yet, go visit his blog and subscribe to the newsletter ;)

Jason’s Blog

1. Why did you make the blog in the first place?

That was the first question I asked myself when starting this blog. It was part of the DGPLUG #summertraining, and I kind of started without actually knowing what to do with it. But I did want to have my own little corner in cyberspace.

Why another blog?

2. What platform are you using to manage your blog and why did you choose it?

I have a home server running vim in a tmux session. The articles are written as gemtext, as I have decided that my gemlog should be the source of truth for my blog. I’ve written some little bash scripts to convert everything to HTML and an Atom feed as well, but I’m actually not very motivated anymore to care about website design. Gemtext is the simplest markup language I know, and keeping it simple makes the most sense to me.

Gemtext

3. Have you blogged on other platforms before?

I started writing on wordpress.com; without running my own server, it was the most accessible platform for me. When moving to my own infrastructure I used Lektor, a static website generator framework written in Python. It was quite nice and powerful, but in the end I wanted to get rid of the extra dependencies and simplify even more.

Lektor

4. How do you write your posts?

Rarely. If I write, I just write, basically the same way I would talk. There were a very few posts for which I did some research, because I wanted to make them a useful and comprehensive source for future look-ups, but in most cases I’m simply too lazy. I don’t spend much time on structure or on thinking about how to guide the reader through my thoughts; it’s just for me and anyone who cares.

5. When do you feel most inspired to write?

Always in situations when I don’t have the time to write, never when I do have the time. Maybe there’s something wrong with me.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

Mostly, yes. I do have a couple of posts that I didn’t publish immediately, so they are still not published. I find it hard to revisit my own writing, so I try to avoid that by publishing immediately :)

7. Your favorite post on your blog?

The post I have looked up most often myself is the PostgreSQL migration one. It was a good idea to write that down ;)

Postgres migration between multiple instances

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

I just did a major refactoring of the system, basically doing everything manually now. It forces me to keep things simple, because I think it should be simple to write and publish a text online. I also hope to have lowered the threshold for starting new posts. So, piloting the current system it is.

February 18, 2025 12:00 AM

TIL: what a structural schema is, and how to drop unknown fields in a custom resource (CR)

Today, during our pair-(learning/programming) session for KEP 4595, aka CEL for CRD AdditionalPrinterColumns, Sreeram and I came across this article – Future of CRDs: Structural Schemas.
(It’s an old article from 2019, written by Dr. Stefan Schimanski1. I will check if and what has changed since 2019, but reading the article even in its current state was a turning point for me w.r.t. my understanding of CRDs.)

For the first time today, I understood what a structural schema is (something that I keep reading about and keep finding referenced everywhere within the Kubernetes codebase, over and over again).

So, Today I Learnt (TIL)2:

An OpenAPI v3 schema is a structural schema, if:

  1. the core of the OpenAPI v3 schema is made out of the following 7 constructs:
    • properties
    • items
    • additionalProperties
    • type
    • nullable
    • title
    • description

(In addition, in each sub-schema only one of properties, additionalProperties or items may be used.)

  2. all types are defined and non-empty

  3. the core is extended with value validation following these constraints:

    • (i) inside of value validations, there is no additionalProperties, type, nullable, title, description
    • (ii) all fields mentioned in value validation are specified in the core.

Example of a structural schema (read the snippet below as the schema field of a CRD definition file):

  type: object
  properties:
    spec:
      type: object
      properties:
        command:
          type: string
          minLength: 1                          # value validation
        shell:
          type: string
          minLength: 1                          # value validation
        machines:
          type: array
          items:
            type: string
            pattern: "^[a-z0-9]+(-[a-z0-9]+)*$" # value validation
      oneOf:                                    # value validation
      - required: ["command"]                   # value validation
      - required: ["shell"]                     # value validation
  required: ["spec"]                            # value validation

This schema is structural:

  • because we have only used the permitted OpenAPI constructs (Rule 1),
  • because each field (like spec, command, shell, machines, …) has its type defined (Rule 2),
  • and because all value validations follow the constraints defined above (Rule 3).

Example of a non-structural schema (again, read the snippet below as the schema field of a CRD definition file):

  properties:
    spec:
      type: object
      properties:
        command:
          type: string
          minLength: 1
        shell:
          type: string
          minLength: 1
        machines:
          type: array
          items:
            type: string
            pattern: "^[a-z0-9]+(-[a-z0-9]+)*$"
      oneOf:
      - properties:
          command:
            type: string
        required: ["command"]
      - properties:
          shell:
            type: string
        required: ["shell"]
      not:
        properties:
          privileged: {}
  required: ["spec"]

This schema is non-structural for many reasons:

  • type: object at the root is missing (Rule 2).
  • inside of oneOf it is not allowed to use type (Rule 3-i).
  • inside of not the property privileged is mentioned, but it is not specified in the core (Rule 3-ii).

The non-structural schema highlights a big problem from the days before structural schemas were in place: if we can’t use not to tell the Kubernetes API server to drop the privileged field, then that unknown field will be preserved forever by the API server in Etcd.

This was fixed by Pruning (which is on by default in apiextensions.k8s.io/v1, but had to be explicitly enabled in apiextensions.k8s.io/v1beta1).

Pruning in apiextensions.k8s.io/v1beta1 is enabled via:

  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  spec:
    …
    preserveUnknownFields: false
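
To see pruning in action, a hypothetical round trip (assuming a CRD whose structural schema is the command/shell example above, registered with kind MyApp under example.com/v1; all names invented): create an object carrying an unknown field, then read it back.

kubectl apply -f - <<EOF
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: prune-demo
spec:
  command: "echo hello"
  privileged: true
EOF
# read it back: spec.privileged should have been pruned, not persisted
kubectl get myapp prune-demo -o yaml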


  1. well, while searching for links to add as a hyperlink for Dr. Stefan Schimanski, I came across more articles written by him, so those are now on my ToDo list. 

  2. This is just me rephrasing what I learnt from, and what is mentioned in, the original article – Future of CRDs: Structural Schemas

February 17, 2025 12:00 AM

The final release of Kubernetes Mandala

Kubernetes v1.29.0, aka Kubernetes Mandala, was released on December 13, 2023. This marked the last Kubernetes minor version release of the year 2023.
(The original release date was December 5, 2023, but then we had a few rounds of unavoidable delays.) 🙂

This is a very special release for me. I had the honor of being the Release Lead for the Kubernetes v1.29 release cycle, which had a team of 40 ever-amazing folks.
This release, like all others, also had a very special theme & logo — thanks to Jason Braganza. He shared the story – On How the Kubernetes v1.29 Logo Came About.

Logo — Kubernetes Mandala

Today, February 13, 2025, a year and two months later, marks the final release of Kubernetes 1.29, aka v1.29.14.

Kubernetes v1.29.14

I’m really grateful for my experience during the four months from September to December 2023. It had its ups and downs, but I learned a whole lot.
Most importantly, it helped me become more confident and comfortable in having conversations and discussions, and especially in finding solutions – both on regular days and especially on tough ones.

February 13, 2025 12:00 AM

pass using stateless OpenPGP command line interface

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.

Installation

cargo install rsop rsop-oct

And copied the bash script from my repository to somewhere on my PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
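
Roughly, the flow under the hood is plain SOP; here is a hedged sketch of the equivalent manual commands (file names invented, and the exact rsoct invocation may differ):

# encrypt a secret to the cert stored in the password store
rsop encrypt ~/.password-store/.gpg-key < secret.txt > secret.pgp
# decryption goes through the card-backed rsoct, which prompts for the PIN
rsoct decrypt < secret.pgp > secret.out.txt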

Usage

Nothing changed in my daily pass usage, except the number of times I am typing my PIN :)

February 12, 2025 05:26 AM

Using openpgp-card-tool-git with git

Part of the power of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time handles git signing & verification using OpenPGP, with my Yubikey doing the actual signing operation, via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.

Installation & configuration

cargo install openpgp-card-tool-git

Then you will have to configure your git configuration (in my case, the global one).

git config --global gpg.program <path to oct-git>

I am assuming that you already had git configured for signing; otherwise you have to run the following two commands too.

git config --global commit.gpgsign true
git config --global tag.gpgsign true

Usage

Before you start using it, you want to save the PIN in your system keyring.

Use the following command.

oct-git --store-card-pin

That is it; now git will sign your commits using the oct-git tool.
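
To confirm the round trip works, git can verify using the same configured gpg.program:

git commit --allow-empty -m "signing test"
git log --show-signature -1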

In the next blog post I will show how to use the other tools from the author for various other OpenPGP operations.

February 11, 2025 11:12 AM

KubeCon + CloudNativeCon India 2024

Banner with KubeCon and Cloud Native Con India logos

Conference attendance had taken a hit since the onset of the COVID-19 pandemic. I attended many virtual conferences, though, and was glad to present at a few, like FOSDEM - a conference I had always longed to present at.

Sadly, virtual conferences did not have the feel of in-person ones. With 2024 here, and being fully vaccinated, I started attending a few in-person conferences again. The year started with FOSSASIA in Hanoi, Vietnam, followed by a few more over the next few months.

December 2024 was going to be special as we were all waiting for the first edition of KubeCon + CloudNativeCon in India. I had planned to attend the EU/NA editions of the conference, but visa issues made those more difficult to attend. As fate would have it, India was the one planned for me.

KubeCon + CloudNativeCon India 2024 took place in the capital city, Delhi, India, from 11th - 12th December 2024, along with co-located events hosted at the same venue, Yashobhoomi Convention Centre, on 10th December 2024.

Venue

Let’s start with the venue. As an organizer of other conferences, the thing that blew my mind was the venue itself, YASHOBHOOMI (India International Convention and Expo Centre). It is huge enough to accommodate large-scale conferences, and I got to know that the convention centre is still a work in progress, with more halls to come. If I heard correctly, another conference was running in parallel at the venue around the same time.

Now, let’s jump to the conference.

Maintainer Summit

The first day of the conference, 10th December 2024, was the CNCF Maintainer Summit. The event is exclusive to people behind CNCF projects, providing space to showcase their projects and meet other maintainers face-to-face.

Due to the chilly and foggy morning, the event started a bit late to accommodate more participants for the very first talk. The event had a total of six talks, including the welcome note. Our project, Flatcar Container Linux, also had a talk accepted: “A Maintainer’s Odyssey: Time, Technology and Transformation”.

This talk took attendees through the journey of Flatcar Container Linux from a maintainer’s perspective. It shared Flatcar’s inspiration - the journey from a “friendly fork” of CoreOS Container Linux to becoming a robust, independent, container-optimized Linux OS. The journey began with a daunting red CI dashboard, almost-zero platform support, an unstructured release pipeline, a mammoth list of outdated packages, missing support for ARM architecture, and more – hardly a foundation for future initiatives. The talk described how, over the years, countless human hours were dedicated to transforming Flatcar, the initiatives we undertook, and the lessons we learned as a team. A good conversation followed during the Q&A, with questions about release pipelines and architectures, and continued in the hallway track.

During the second half, I hosted an unconference titled “Special Purpose Operating System WG (SPOS WG) / Immutable OSes”. The aim was to discuss the WG with other maintainers and enlighten the audience about it. During the session, we had a general introduction to the SPOS WG and immutable OSes. It was great to see maintainers and users from Flatcar, Fedora CoreOS, PhotonOS, and Bluefin joining the unconference. Since most attendees were new to Immutable OSes, many questions focused on how these OSes plug into the existing ecosystem and the differences between available options. A productive discussion followed about the update mechanism and how people leverage the minimal management required for these OSes.

I later joined the Kubeflow unconference. Kubeflow is a Kubernetes-native platform that orchestrates machine learning workflows through custom controllers. It excels at managing ML systems with a focus on creating independent microservices, running on any infrastructure, and scaling workloads efficiently. The discussion covered how ML training jobs utilize batch processing capabilities with features like job queuing and fault tolerance, while inference workloads operate in a serverless manner, scaling pods dynamically based on demand. Kubeflow abstracts away the complexity of different ML frameworks (TensorFlow, PyTorch) and hardware configurations (GPUs, TPUs), providing intuitive interfaces for both data scientists and infrastructure operators.

Conference Days

During the conference days, I spent much of my time at the booth and doing final prep for my talk and tutorial.

On the maintainer summit day, I had gone to check the room for my conference-day sessions, but discovered that the room didn’t exist in the venue. So, on the conference days, I started by informing the organizers about the schedule issue. Then I proceeded to the keynote auditorium, where Chris Aniszczyk, CTO, Linux Foundation (CNCF), kicked off the conference by sharing updates about the Cloud Native space and ongoing initiatives. This was followed by Flipkart’s keynote talk and a wonderful, insightful panel discussion. Nikhita’s keynote on “The Cloud Native So Far” is a must-watch, where she talked about CNCF’s journey until now.

After the keynote, I went to the speaker’s room, prepared briefly, and then proceeded to the community booth area to set up the Flatcar Container Linux booth. The booth received many visitors. Being alone there, I asked Anirudha Basak, a Flatcar contributor, to help for a while. People asked all sorts of questions, from Flatcar’s relevance in the CNCF space to how it works as a container host and how they could adopt Flatcar in their infrastructure.

Around 5 PM, I wrapped up the booth and went to my talk room to present “Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext”. The talk covered an introduction to systemd-sysext, Flatcar & Cluster API. It then discussed how the current setup using Image Builder poses many infrastructure challenges, and how we’ve been utilizing systemd to resolve these challenges and simplify using ClusterAPI with multiple providers. The post-talk conversation was engaging, as we discussed sysext, which was new to many attendees, leading to productive hallway track discussions.

Day 2 began with me back in the keynote hall. First up were Aparna & Sumedh talking about Shopify using GenAI + Kubernetes for workloads, followed by Lachie sharing the Kubernetes story with Mandala and Indian contributors as the focal point. As an enthusiast photographer, I particularly enjoyed the talk presented through Lachie’s own photographs.

Soon after, I proceeded to my tutorial room. Though I had planned to follow the Flatcar tutorial we have, the AV setup broke down after the introductory start, and the session turned into a Q&A. It was difficult to regain momentum. The middle section was filled mostly with questions, many about Flatcar’s security perspective and its integration. After the tutorial wrapped up, lunch time was mostly taken up by hallway track discussions with tutorial attendees. We had the afternoon slot on the second day for the Flatcar booth, though attendance decreased as people began leaving for the conference’s end. The range of interactions remained similar, with some attendees from talks and workshops visiting the booth for longer discussions. I managed to squeeze in some time to visit the Microsoft booth at the end of the conference.

Overall, I had an excellent experience, and kudos to the organizers for putting on a splendid show.

Takeaways

Being at a booth representing Flatcar for the first time was a unique experience, with a mix of people: some hearing about Flatcar for the first time and confusing it with container images, requiring explanation, and others familiar with container hosts & Flatcar, bringing their own use cases. Questions ranged from update stability to implementing custom modifications required by internal policies, SLSA, and more. While I’ve managed booths before, this was notably different. Better preparation regarding booth displays, goodies, and Flatcar resources would have been helpful.

The talk went well, but presenting a tutorial was a different experience. I had expected hands-on participation, having recently conducted a successful similar session at rootconf. However, since most KubeCon attendees didn’t bring computers, I plan to modify my approach for future KubeCon tutorials.

At the booth, I also received questions about WASM + Flatcar, as Flatcar was categorized under WASM in the display.


Photo credits go to CNCF (from the KubeCon + CloudNativeCon India 2024 Flickr album) and to @vipulgupta.travel.

February 05, 2025 12:00 AM

Blog Questions Challenge 2025

Ava started this, Kev modified this, and Saptak egged me on to write this. So here goes …

screenshot of my homepage

What the home page looks like in 2025


1. Why did you make the blog in the first place?

I write for me mostly. Because writing helps me think. My thoughts are too scattered otherwise. I can’t not write. I’ve always written. Privately, publicly, there’s always been some place where I’ve jotted things down.

2. What platform are you using to manage your blog and why did you choose it?

I use Hugo to generate the site, which I host on my own Hetzner VM. I use it because I outgrew my previous tool, Nikola, which still holds a dear place in my heart. While Hugo is enormously complex, it is also simple enough to get started with. And it’s unbelievably fast. That’s what I love about it. It lets me write. It does not get in my way. It lets me preview what I’m doing with its live server.

3. Have you blogged on other platforms before?

I’ve been writing in some form or other since the late 90s. So … yea :)
LiveJournal, Blogger, self-hosted WordPress, WordPress.com, Posterous, Tumblr, self-hosted WordPress, self-hosted Ghost, Nikola and now Hugo. It’s been quite a ride!

4. How do you write your posts?

I write them in Emacs (in Markdown, using Markdown Mode) on my desktop, with the Hugo server running alongside, giving me a preview of what things will look like. Once I commit it to my self-hosted Forgejo instance, an action publishes the site automatically.
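The post does not show the workflow itself, but a minimal sketch of such a Forgejo Actions workflow (Forgejo uses GitHub Actions-compatible syntax) might look like the following; the branch name, runner label, and rsync destination are all assumptions on my part.

name: publish
on:
  push:
    branches: [main]            # hypothetical branch name
jobs:
  publish:
    runs-on: docker             # assumes a runner registered with this label
    steps:
      - uses: actions/checkout@v4
      - name: Build the site
        run: hugo --minify      # assumes hugo is installed on the runner
      - name: Deploy
        # hypothetical destination; any sync to the web root would do
        run: rsync -av --delete public/ deploy@example.net:/var/www/site/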

5. When do you feel most inspired to write?

I never do. I write because it helps me function. And yet, it always feels like a chore.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

I always publish it immediately. I almost never write something that is deeply thought out, that needs to stand the test of time. It’s the process, the writing, the putting of words out of my mind, through my fingers, down to paper, that leads to the result, the thought, the opinion, the aha, the insight. So I’m never quite done. Which means if I ever wait for finished, the post will never get published. The moment I publish something is invariably the moment something needs changing. So I just go back and edit it. I never notify folks about updates made on the first day. If I edit something much later than a day or two, then I do.

7. Your favorite post on your blog?

None. All. They’re my thoughts, so depending on my mood, they’re either worthless or priceless gems!

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

Not really. For all my wandering, I’ve only ever moved when my tools outgrew me¹, or I outgrew my tools. For now, Hugo does all I ask of it, without getting in the way. The day that changes will be the day I move.


I’ll ask Priyanka, Sreeram, Sandeep, Pradhvan, Rahul, Bhavin, Elia, Mandar, Saptak, Farhaan, Robin and Kushal to share more, if they have the time, energy and the inclination.

Folks who’ve answered my call to arms! Go read their answers too!


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. WordPress.com too restrictive, Ghost stopped serving my specific needs, etc. ↩︎

February 04, 2025 06:40 AM

On Neil Gaiman

Considering how much I’ve quoted Neil Gaiman (here, here, here, here, here and many other places on the blog) and how much his stories have influenced me, I feel a bit obligated to put this personal statement out.

What he did was really, really wrong! Horrifyingly wrong!
The girls, the women, were wronged. Grossly so. Often violently so.

Never meet your heroes and idols with feet of clay and all that.

So I’ve given away (or deleted) all my Gaiman books, save two: my collected editions of Sandman, and my signed copy of What You Need to Be Warm.
While it is true that Gaiman shot to stardom with Sandman, that was not the reason I bought this collected edition. I bought it for the young boy who would scour the lanes of Matunga and Fort, looking for more erudite comics after reading Moore’s V for Vendetta and Watchmen. Sandman was something I discovered on my own and enjoyed so much.
Besides, it was never about the writing at that stage. It was the stories, from all over the world and across cultures, that he’d reimagine for Sandman. (Ramadan, A Midsummer Night’s Dream, Thermidor, The Dream Hunters …) And even more importantly, it was the pictures, the drawings, the gorgeous art. (Yoshitaka Amano, Dave McKean, Todd Klein, and all the others.) So Sandman stays. And with What You Need to Be Warm, the money went to a smol shop and to a good cause, both. So I don’t feel bad owning it.

“Man’s not dead while his name is still spoken” — Terry Pratchett

And so, this is the last time I speak your name. You’re dead to me.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


February 03, 2025 08:18 AM

Pixelfed on Docker

I have been running a Pixelfed instance for some time now at https://pixel.kushaldas.photography/kushal. This post contains quick setup instructions for the same using docker/containers.

screenshot of the site

Copy over .env.docker file

We will need the .env.docker file and to modify it as required, especially the following variables; you will have to fill in the values for each one of them.

APP_NAME=
APP_DOMAIN=
OPEN_REGISTRATION="false"   # because personal site
ENFORCE_EMAIL_VERIFICATION="false" # because personal site
DB_PASSWORD=

# Extra values to db itself
MYSQL_DATABASE=
MYSQL_PASSWORD=
MYSQL_USER=

CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"
QUEUE_DRIVER="redis"
SESSION_DRIVER="redis"

REDIS_HOST="redis"

ACTIVITY_PUB="true"

LOG_CHANNEL="stderr"

The actual docker compose file:

---

services:
  app:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    depends_on:
      - db
      - redis
    # The port statement makes Pixelfed run on Port 8080, no SSL.
    # For a real instance you need a frontend proxy instead!
    ports:
      - "8080:80"

  worker:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    entrypoint: /worker-entrypoint.sh
    depends_on:
      - db
      - redis
      - app
    healthcheck:
      test: php artisan horizon:status | grep running
      interval: 60s
      timeout: 5s
      retries: 1

  db:
    image: mariadb:11.2
    restart: unless-stopped
    env_file:
      - ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGE_ME
    volumes:
      - "/data/db-data:/var/lib/mysql"

  redis:
    image: zknt/redis
    restart: unless-stopped
    volumes:
      - "redis-data:/data"

volumes:
  redis-data:

I am using nginx as the reverse proxy. The only thing to remember there is to pass .well-known/acme-challenge to the correct directory for letsencrypt; the rest should point to the container.
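A minimal sketch of that nginx configuration, assuming the challenge webroot lives at /var/www/letsencrypt and the container is published on port 8080 as in the compose file above; the domain and certificate paths are placeholders.

server {
    listen 80;
    server_name pixel.example.org;                  # placeholder domain

    # serve the letsencrypt challenge from disk
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # everything else goes to https
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name pixel.example.org;
    ssl_certificate /etc/letsencrypt/live/pixel.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pixel.example.org/privkey.pem;

    # the rest points to the container
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}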

January 31, 2025 05:44 AM

Dealing with egl_bad_alloc error for webkit

I was trying out some Toga examples, and for the webview I kept getting the following error and a blank screen.

Could not create EGL surfaceless context: EGL_BAD_ALLOC.

After many hours of searching, I reduced the reproducer to a simple piece of Python GTK code.

import gi

gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')

from gi.repository import Gtk, WebKit2

# A bare window that quits the main loop when closed
window = Gtk.Window()
window.set_default_size(800, 600)
window.connect("destroy", Gtk.main_quit)

# A WebKit webview inside a scrolled window, loading a test page
scrolled_window = Gtk.ScrolledWindow()
webview = WebKit2.WebView()
webview.load_uri("https://getfedora.org")
scrolled_window.add(webview)

window.add(scrolled_window)
window.show_all()
Gtk.main()

Finally I asked for help in the #fedora IRC channel, and within seconds Khaytsus gave me the fix:

WEBKIT_DISABLE_COMPOSITING_MODE=1 python g.py

working webview
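If you would rather not export the variable on every run, one option (my addition, not from the original post) is to set it at the top of the script, before the gi imports, since WebKit only reads it during initialization:

import os

# Must be set before WebKit2 is loaded; the top of the script is early
# enough, since the gi imports below are what trigger initialization.
os.environ["WEBKIT_DISABLE_COMPOSITING_MODE"] = "1"

# ... then the gi imports and the rest of the reproducer, as above.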

January 18, 2025 07:43 AM

pastewindow.nvim my first neovim plugin

pastewindow is a Neovim plugin written in Lua to help paste text from a buffer to a different window in Neovim. This is my first attempt at writing a plugin.

We can select a window (in the GIF below I am using a bash terminal as the target) and send any text to that window. This will be helpful in my teaching sessions, especially when modifying larger Python functions.

demo

I am yet to go through all the Advent of Neovim videos from TJ DeVries. I am hoping to improve the plugin (and add more features) after I learn about plugin development from the videos.

December 27, 2024 08:19 AM

Keynote at PyLadiesCon!

Since the very inception of my journey in Python and PyLadies, I have always thought of having a PyLadies Conference, a celebration of PyLadies. There were conversations here and there, but nothing was fruitful then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how could I not?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?

On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice, and saw an email from PyLadiesCon, titled "Invitation to be a Keynote Speaker at PyLadiesCon". The panic call went to Kushal Das: "Check if there is any attack on the Python server? I got a spam email about PyLadiesCon and the address is correct." "No, nothing," replied Kushal after checking. Wait, then ... WHAT??? PyLadiesCon wants me to give the keynote. THE KEYNOTE at PyLadiesCon.

Thank you Audrey for conceptualizing and creating PyLadies, our home.

keynote_pyladiescon.png

And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon on how PyLadies gave me purpose. See you all there.

Dreams do come true.

by Anwesha Das at November 29, 2024 05:35 PM

Looking back to Euro Python 2024

Over the years, when I am low, I always go to the 2014 Euro Python talk "Farewell and Welcome Home: Python in Two Genders" by Naomi. It has become the first step of my coping mechanism and the door to my safe house. Though 2024 marked my first Euro Python in person, I have long had a connection with and respect for the conference. A conference that believes community matters, that human values and feelings matter, and that is not afraid to walk the talk. And the conference lived up to my expectations in every way.

euro_python_3.jpeg

My Talk: Intellectual Property Law 101

I gave my talk on Intellectual Property Law on the first day. It had been a long time since I last spoke on a legal topic. This talk was dedicated to developers, so I concentrated only on the issues that concern them, and tried to stitch the related topics of patents, trademarks, and copyright together so that the talk would flow smoothly and be easier for developers to understand, remember, and put to practical use. I was worried whether I would be able to connect with people. Later, people came to me with several related questions, such as:

  • Why should I be concerned about patents?

  • Which license would fit my project?

  • Should I be scared about any Trademarks granted to other organizations under some other jurisdiction?

So on and so forth. Though I could not finish the whole talk due to time constraints, I am happy with the overall response.

Panel: Open Source Sustainability

On Day 1 of the main conference, we had the panel on Open Source Sustainability. This topic lies at the core of the open-source ecosystem: the sustainability of projects and the community, for the future and for stability. The panel had Deb Nicholson, Armin Ronacher, Çağıl Uluşahin Sönmez, Samuel Colvin, and me, with Artur Czepiel as the moderator. I was happy to represent my community's side. It was a good discussion, and hopefully we could answer some questions of the community in general.

Birds of Feather session: Open Source Release Management

This Birds of a Feather (BoF) session was intended to deal with the release management of various Open Source projects, irrespective of their size. The discussion included all kinds of projects, from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with several hundred contributors.

  • What methods do we follow regarding versioning, release cadence, and the process?

  • Do most of us follow manual processes or depend on automated ones?

  • What works and what does not, and how can we improve our lives?

  • What are the significant points that make the difference?

We discussed and covered the following topics: different aspects of release management of Open Source projects, security, automation, CI usage, and documentation. We followed the Chatham House Rule during the discussion to provide space for open, frank, and collaborative conversation.

PyLadies Lunch

And then comes my favorite part of the conference: the PyLadies Lunch. It was my seventh PyLadies lunch, and I was moderating it for the fifth time. But this time, my wonderful friends Laís and Çağıl were by my side, holding me up when I failed. I love every time I am at a PyLadies lunch. This is where I get my strength, energy, and love.

Workshop

I attended two workshops organized by Anezka Muller, Mia Bajić, and all the amazing PyLadies organizers:

  • Self-defense workshop where the moderators helped us navigate challenging situations we face in life, safeguard ourselves from them, and overcome them.

  • I AM Remarkable workshop, where we learned to tell people about our successes.

Representing Ansible Community

I always take the chance to meet Ansible community members face-to-face. Euro Python gave me another opportunity to do that. I learned about different user stories that we do not get to hear from our work corners, and about unique problems and their solutions in Ansible.
Fun fact: Maarten, after learning that I am Anwesha from the Ansible project, said, 'Can you Ansible people slow down releasing new versions of Ansible? Every time we get used to one, there is a new version.'

euro_python_1.jpeg

Acknowledging mental health issues

The proudest moment for me personally was when I acknowledged my mental health issues, and later when people came to me saying how they related to it and how empowered they felt when I mentioned it.

euro_python_2.jpeg

PyLadies network at Red Hat

A network of PyLadies within Red Hat has been my dream since I joined Red Hat. Karolina also agreed when I shared this with her at last year's DevConf. And finally, we initiated it on day 2 of the conference. We are so excited for what the future holds.

Meeting friends

Conference means friends. It was so great to meet so many friends after such a long time: Tylor, Nicholas, Naomi, Honza, Carol, Mike, Artur, Nikita, Valerio, and many new ones: Jannis, Joana, Christian, Martina, Tereza, Maria, Alyona, Mia, Naa, Bojan, and Jodie. A special note of love to Jodie: you held my hand and led me out of the dark.

euro_python_4.jpeg

The best is saved for the last. Euro Python 2024 made 3 of my dreams come true.

  • Gender Neutral Washrooms

  • Sanitary products in restrooms (I remember carrying sanitary napkins in my backpack at PyCon India and telling girls that if they needed them, they were available at the PyLadies booth).

  • Neurodiversity bag (which saved me at the conference; thank you, Karolina, for this)

euro_python_0.jpeg

I cannot wait for the next Euro Python; see you all at Euro Python 2025.

PS: Thanks to Laís, I will always have a small piece of Euro Python 2024 with me. I know I am loved and cared for.

by Anwesha Das at July 17, 2024 11:42 AM

Euro Python 2024

It is July, and it is time for Euro Python, and 2024 is my first Euro Python. Some busy days are on the way. Like every other conference, I have my diary, and the conference days are full of various activities.

euro_travel_0.jpeg

Day 0 of the main conference

After a long time, I will give a legal talk. We are going to dig into some basics of Intellectual Property. What is it? Why do we need it? What are the different kinds of intellectual property? It is a legal talk designed for developers, so anyone and everyone from the community, even without previous knowledge, can understand the content and use it to understand their fundamental rights and duties as developers. The talk, Intellectual Property 101, is scheduled at 11:35 hrs.

Day 1 of the main conference

Day 1 is PyLadies Day, a day dedicated to PyLadies. We have crafted the day with several different kinds of events. The day opens with a self-defense workshop at 10:30 hrs. PyLadies, throughout the world, aims to provide and foster a safe space for women and friends in the Python community. This workshop is an extension of that goal. We will learn how to deal with challenging, inappropriate behavior, whether in the community, at work, or in any social space. We will have a trained psychologist as a session guide to help us. This workshop is as important today as it was yesterday and may be in the future (at least until the enforcement of CoC is clear). I am so looking forward to the workshop. Thank you, Mia, Laís, and all the PyLadies for organizing this and giving shape to my long-cherished dream.

Then we have my favorite part of the conference, PyLadies Lunch. I crafted the afternoon with a little introduction session, shout-out session, food, fun, laughter, and friends.

After the PyLadies Lunch, I have my only non-PyLadies session, which is a panel discussion on Open Source Sustainability. We will discuss the different aspects of sustainability in the open source space and community.

Again, it is PyLadies' time. Here, we have two sessions.

IAmRemarkable (https://ep2024.europython.eu/pyladies-events#iamremarkable), to help empower you by celebrating your achievements and to fight impostor syndrome. The workshop will help you celebrate your accomplishments and improve your self-promotion skills.

The second session is a 1:1 mentoring event, Meet & Greet with PyLadies. Here, willing PyLadies will be able to mentor and be mentored. They can be coached in different subjects, from programming and learning to things related to jobs and/or careers, etc.

Birds of a Feather session on Release Management of Open Source projects

It is an open discussion related to release management in the Open Source ecosystem.
The discussion includes everything from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with a contributor base of several hundred. What are the different methods we follow regarding versioning, release cadence, and the process itself? Do most of us follow manual processes or depend on automated ones? What works and what does not, and how can we improve our lives? What are the significant points that make the difference? We will discuss and cover the following topics: release management of open source projects, security, automation, CI usage, and documentation. In the discussion, I will share my release automation journey with Ansible. We will follow the Chatham House Rule during the discussion to provide space for open, frank, and collaborative conversation.

So, here come the days of code, collaboration, and community. See you all there.

PS: I miss my little Py-Lady volunteering at the booth.

by Anwesha Das at July 08, 2024 09:56 AM

Event Driven Ansible, what, why and how?

Ansible Playbooks is the well-known term; now there is a new term being floated in the project, which is Ansible Rulebooks. Today we are going to discuss Ansible's journey from Playbook to Rulebook, or rather, Playbook with Rulebook.

What is Event Driven Ansible?

What is Event Driven Ansible? In simple terms, some action is triggered by some event. The idea of EDA comes from event-driven architecture. Event Driven Ansible runs code automatically based on received event notifications.

Some important terms:

What is an event in Event Driven Ansible?

The event is the notification of a certain incident.

Where do we get the events from?

We get the events from event sources. Ansible EDA provides different plugins to support various event sources. There are several event source plugins, such as:
url_check (checking the HTTP status code), webhook (providing and checking events from a webhook), journald (monitoring the journald logs), and the list goes on.

When to take actions?

A rulebook defines conditions and the actions to take when those conditions are fulfilled. Conditions use operators on strings, booleans, and numerical data. Actions are what happens once the conditions are met: running a playbook, setting a fact, running a module, etc.
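As a quick illustration (a hypothetical snippet, separate from the example project below), a rule's condition can combine string and numerical operators:

rules:
  - name: Restart on repeated failures
    condition: event.payload.service == "nginx" and event.payload.failures > 3
    action:
      run_playbook:
        name: restart.yml        # hypothetical playbook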

Small example Project

Here is a small example of Event Driven Ansible and how it is run. The idea is that on receiving a message (here the number 42), a playbook will run on the host. There are the following 3 files:

demo_rule.yml

---
- name: Listen for events on a webhook
  hosts: all

  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 8000

  rules:
    - name: Say thank you
      condition: event.payload.message == "42"
      action:
        run_playbook:
          name: demo.yml

This is the rulebook. We are using the webhook plugin here as the event source. As a rule, on receiving the message 42 as a JSON payload on the webhook, we run the playbook called demo.yml.

demo.yml

- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "Thank you for the answer."

demo.yml is the playbook which runs on the occurrence of the event mentioned in the rulebook and prints a debug message.

---
local:
  hosts:
    localhost

inventory.yml mentions the hosts to run the action against.

Further, there are 2 files, 42.json and 43.json, to test the code.

{
  "message" : "42"
}
{
  "message" : "43"
}

First we have to install all related dependencies before we can run the rulebook.

$ python -m venv .venv
$ source .venv/bin/activate
$ python -m pip install ansible ansible-rulebook ansible-runner psycopg
$ ansible-galaxy collection install ansible.eda
$ ansible-rulebook --rulebook demo_rule.yml -i inventory.yml --verbose

Go to another terminal, in the same directory, and run the following command to test the rulebook. After receiving the message, the playbook runs.

curl -X POST -H "Content-Type: application/json" -d @42.json 127.0.0.1:8000/endpoint

Output

2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting sources
2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting rules

...

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Thank you for the answer."
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
2024-06-07 16:50:08,224 - ansible_rulebook.action.runner - INFO - Ansible runner Queue task cancelled
2024-06-07 16:50:08,225 - ansible_rulebook.action.run_playbook - INFO - Ansible runner rc: 0, status: successful

Now if we send the other JSON file, 43.json, we see that the playbook does not run even though the HTTP status code is 200.

curl -X POST -H "Content-Type: application/json" -d @43.json 127.0.0.1:8000/endpoint

Output :

2024-06-07 18:20:37,633 - aiohttp.access - INFO - 127.0.0.1 [07/Jun/2024:17:20:37 +0100] "POST /endpoint HTTP/1.1" 200 159 "-" "curl/8.2.1"


You can try this yourself by following this git repository.

by Anwesha Das at June 07, 2024 06:02 PM

A Tragic Collision: Lessons from the Pune Porsche Accident

I’m writing a blog after a very long time, as I kept procrastinating, but today I decided to write about something important, and yes, it is a hot topic in the country right now. In Pune, a 17-year-old boy was driving a Porsche while under the influence of alcohol. As I read in the news, he was speeding, and his car hit a two-wheeler, resulting in the death of two young people who were techies.
June 03, 2024 11:39 AM

Test container image with eercheck

Execution Environments give us the benefits of containerization by solving issues such as software dependencies and portability. Ansible Execution Environments are Ansible control nodes packaged as container images. There are two kinds of Ansible execution environments:

  • Base, includes the following

    • fedora base image
    • ansible core
    • ansible collections: the following set of collections:
      ansible.posix
      ansible.utils
      ansible.windows
  • Minimal, includes the following

    • fedora base image
    • ansible core

I have been the release manager for Ansible Execution Environments. After building the images, I perform certain test steps to check whether the versions of the different components of the newly built images are correct. So I wrote eercheck to ease those test steps.

What is eercheck?

eercheck is a command line tool to test the Ansible Community Execution Environment before release. It uses podman-py to connect to and work with the podman container image, and Python unittest for testing the containers. The project is licensed under GPL-3.0-or-later.

How to use eercheck?

Create and activate the virtual environment in the working directory, and install the requirements.

python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt

Activate the podman socket.

systemctl --user start podman.socket

Update vars.json with the correct version numbers. Pick the correct versions of the Ansible Collections from the .deps file of the corresponding Ansible community package release. For example, for 9.4.0 the collection versions can be found here. You can find the appropriate version of the Ansible Community Package here. The check needs to be carried out each time before the release of the Ansible Community Execution Environment.
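The exact keys are defined by eercheck itself, but as a purely hypothetical illustration, vars.json pins versions along these lines (the numbers are placeholders):

{
  "ansible_core": "x.y.z",
  "ansible_community": "9.4.0",
  "collections": {
    "ansible.posix": "x.y.z",
    "ansible.utils": "x.y.z",
    "ansible.windows": "x.y.z"
  }
}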

Execute the program by giving the correct container image id.
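You can list the image ids with podman, for example (the repository names on your machine will differ):

podman images --format "{{.ID}} {{.Repository}}:{{.Tag}}"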

./containertest.py image_id

Happy automating.

by Anwesha Das at April 08, 2024 02:25 PM

Making my first OnionShare release

One of the biggest bottlenecks in maintaining the OnionShare desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that a release works across the different desktop operating systems. To know more about the pain that goes into making an OnionShare release, read the blogs[1][2][3] that Micah Lee wrote on this topic.

However, one other big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases, and even though the other maintainers are aware of the process, we have never actually made a release. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.

PS: Since Micah has written pretty detailed blogs with code snippets, I am not going to include many code snippets (unless I made significant changes), so as not to lengthen this already long post further. I am going to keep this blog more like a narrative of my experience.

Getting the hardware ready

Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine to do the OnionShare release work, especially the signing part of things. Micah has already automated a lot of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon versions of OnionShare manually and then merge them with the Intel version to create a universal2 app bundle.

Also, in general, it's a good practice to have and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for the same. This would help me build the Apple Silicon version as well as sign the packages for the other operating systems.

Also, I received the HARICA signing keys from Glenn Sorrentino, which are needed for signing the Windows releases.

Fixing the bugs, merging the PRs

After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making 2.6.1. We fixed, reviewed, and merged most of those fixes. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.

Translations

Localization is an important part of OnionShare since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, emmapeel2, who always helps us with weblate wizardry, made certain changes in the setup, which I also wanted to include in this release.

After creating the release PR, I also needed to check which languages were more than 90% translated, make a push to hopefully get some more languages past that threshold, and finally make the OnionShare release with only the languages that cross it.

Making the Release PR

And then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.

Updating dependencies in Snapcraft

Updating the poetry dependencies went pretty smoothly.

There was not much to update in the pluggable transport scripts either.

But then I started updating and packaging for Snapcraft and Flatpak. Updating the tor versions to the latest went pretty smoothly. In snapcraft, the python dependencies needed to be compared manually with pyproject.toml. I definitely feel like we should automate this process in the future, but for now, it wasn't too bad.

But trying to build the snap with snapcraft locally just was not working for me on my system. I kept getting lxd errors that I was not fully sure what to do about. I decided to move ahead with the flatpak packaging and wait to discuss the snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.

Updating dependencies in Flatpak

Even though I read about the hardship that Micah had to go through with updating pluggable transports and python dependencies in flatpak packaging, I didn't learn my lesson. I decided, let's give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other failed.

Then I moved on to updating the python dependencies for flatpak. The generator code that Micah wrote for the desktop worked perfectly, but the cli gave me pain. The format in which the dependencies were getting generated and the existing format did not match. And I didn't want to be too brave and change the format, since flatpak isn't my area of expertise. But python kind of is. So I decided to check if I could update the flatpak-poetry-generator.py files to work. And I managed to fix that!

That helped me update the dependencies in flatpak.

MacOS and Windows Signing fun!

Creating Apple Silicon app bundle

As mentioned before, we still need to create an Apple Silicon bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, I needed to install the poetry dependencies, tor dependencies, and the pluggable transport dependencies.

And I hit an issue again: our get-tor.py script was not working.

The script failed to verify the Tor Browser version that we were downloading. This has happened before, and I suspected that the Tor PGP signing key must have expired. I tried verifying manually, and it seems that was the case: the subkey used for signing had expired. So I downloaded the new Tor Browser Developers signing key, created a PR, and it seems I could download tor now.

Once that was done, I just needed to run:

/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build

And amazingly, it built successfully on the very first try! That was easy! Now I just needed to merge the Intel app bundle and the Silicon app bundle, and everything should work (spoiler alert: it doesn't!).

Once the app bundle was created, it was time to sign and notarize. However, the process was a little difficult for me since Micah had previously used an individual account. So I passed the universal2 bundle on to him and moved on to the signing work on Windows.

Signing the Windows package

I had to boot into my Windows 11 VM to finish the signing and make the Windows release. Since this was the first time I was doing the release, I had to first get my VM ready by installing all the dependencies needed for signing and packaging. I am not super familiar with the Windows development environment, so I had to figure out adding PATH entries and other such things to make all the dependencies work. The next thing to do was setting up the HARICA smart card.

Setting up the HARICA smart card

Thankfully, Micah had already done this before, so he was able to help me out a bit. I had to log into the control panel, download and import certificates to my smart card, and change the token password and administrator password for my smart card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.

Since Micah had already made some changes to fix the code signing and packaging stuff, it went pretty smoothly for me and I didn't face many obstructions. Science & Design, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name of Science and Design Inc.

Meanwhile, Micah had got back to me saying that the universal2 bundle didn't work.

So, the Apple Silicon bundle didn't work

One of the mistakes that I made was not testing my Apple Silicon build. I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working. It kept giving a segmentation fault. Time to get back to debugging.

Downgrading cx-freeze to 6.15.9

The first thought that came to my mind was: Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try build (instead of bdist_mac) with the cx-freeze version from that time (6.15.9) and check if the binary created works. And thankfully, it did. I tried with 6.15.10 and it didn't. So I decided to stick with 6.15.9.

So now let's try running bdist_mac, creating a .app bundle, and hopefully everything will work perfectly! But nope! The command failed with:

OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory

So now I had a decision to make: should I try to monkey-patch this and just figure out how to fix it, or should I try to make the latest cx-freeze work? I decided to give the latest cx-freeze (version 6.15.15) another try.

Trying zip_include_packages

So, one thing I noticed we were doing differently from what the cx-freeze documentation and examples for PySide6 mention was that we put our dependencies in packages instead of zip_include_packages in the setup options.

    "build_exe": {
        "packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I thought, let's try moving all of the dependencies into zip_include_packages from packages. Basically, zip_include_packages includes the dependencies in the zip file, whereas packages places them in the file system and not the zip file. My guess was that the Apple Silicon configuration of how a .app bundle should be structured had changed. So the new options looked something like this:

    "build_exe": {
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I created a build using that, ran the binary, and it gave an error. But I was happy, because it wasn't a segmentation fault. The error was mainly because it was not able to import some functions from onionshare_cli. So as a next step, I decided to move everything apart from onionshare and onionshare_cli to zip_include_packages. It looked something like this:

    "build_exe": {
        "packages": [
            "onionshare",
            "onionshare_cli",
        ],
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

This almost worked. The problem was that PySide 6.4 had changed how it deals with enums, and we were still using deprecated code. Fixing the deprecations would take a lot of time, so I decided to create an issue for it and deal with it after the release.

At this point, I was pretty frustrated, so I decided to do what I didn't want to do: just have both packages and zip_include_packages. So I did that, built the binary, and it worked. I decided to make the .app bundle. It worked perfectly as well! Great!

I was a little worried that adding the dependencies in both packages and zip_include_packages might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I don't need to replace the lib directory inside the .app bundle anymore. I ran the cleanup code, hit some FileNotFoundErrors, checked whether the files were now in a different location, couldn't find them, and decided to wrap those calls in a try-except block.
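The guard itself is the standard pattern; a sketch of what I mean, with an illustrative path rather than the actual ones from the cleanup script:

import shutil

# The cleanup assumed these paths exist; after the cx-freeze upgrade
# they may not, so treat a missing path as already cleaned up.
try:
    shutil.rmtree("build/OnionShare.app/Contents/Resources/lib")  # illustrative path
except FileNotFoundError:
    pass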

After that, I merged the Silicon bundle with the Intel bundle to create the universal2 bundle again and sent it to Micah for signing, and it seems everything worked!

Creating PGP signature for all the builds

Now that we had all the build files ready, I tried installing and running them all, and it seems everything works fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created the signatures. So the options for us now were:

  • create an OnionShare GPG key that everyone uses
  • sign with my GPG and update documentations to reflect the same

The issue with creating a new OnionShare GPG key was distribution. The maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.

Concluding the release

Once the signatures were done, the next steps were mostly straightforward:

  • Create a GitHub release
  • Publish onionshare-cli on PyPi
  • Push the build and signatures to the onionshare.org servers and update the website and docs
  • Create PRs in Flathub and Homebrew cask
  • Move the snapcraft edge release to stable

The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.

February 29, 2024 12:41 PM

[2024] Hope - Dailies

This accompanies Hope.

This is a daily journal of the effort, and is to be read from bottom to top.

To explain the jargon:

  • cw: current weight
  • gw: goal weight

Ok, let’s start.

February 12, 2024

- cw: 82.6 kgs

Set in a routine, I went to work out and worked on functional movements with Yugal. I was able to do 5 pushups and 20 knee pushups, slowly building up the energy. Sleep was also proper; I hit an 80+ mark and was well rested. Onto the second week of tracking.

Food was still on track. I need to eat food on time. Also need to get a good diet plan and track macros.

Habit Tracking

- Food Habits: 2.5/5
- Water (320ml cup): 3.5L
- Exercise: 3/5
- Sleep Habit: 3.5/5

February 11, 2024

- cw: 83.0 kgs

February 10, 2024

- cw: 83.0 kgs

February 09, 2024

- cw: 83.5 kgs

February 08, 2024

- cw: 83.6 kgs

February 07, 2024

- cw: 83.7 kgs

A good day: I went to the gym in the morning, followed by a good meal. I still need to work on my sleep, but overall I would rate the day good. I also seem to have reached the weight from where the challenge would begin. Good that I set the initial goal weight to be 81 kgs.

Here is the workout log:

Glute Bridges: 1 min x 2
Plank Hold: 1 min x 2
Tricep Back dips: 15 x 3
Bicep curls: 5kg x 15 x 3
Walking Lunges: 5kg x 10 x 3
Kettlebell Squats: 10kg x 10 x 4
Skips: a couple, learning the form from Jackson
Dumbbell Bench Press: 5kg x 10 x 3 (again learning the form)

In terms of food, I did have a late breakfast after the workout, followed by a late lunch, which did include salad, but I made sure I had an early dinner.

Habit Tracking

- Food Habits: 3.5/5
- Water (320ml cup): 3L
- Exercise: 3/5
- Sleep Habit: 2/5

February 06, 2024

- cw: 83.9 kgs

An okay day of focus, running too much on pure motivation rather than will. I drank a good amount of water, but I need to drink more. In the afternoon I went for a workout session, which was moderate, with scope for improvement. Food habits were okay, as I am still trying to get into the regime.

Habit Tracking

- Food Habits: 3/5
- Water (320ml cup): 8
- Exercise: 2/5
- Sleep Habit: 3/5

February 05, 2024

- cw: 84.8 kgs

Comparatively, it’s been a good day. It’s been pure motivation at the moment; I need to turn that into consistency. I did not get to walk, but I’ve just started to track everything. Went to meet friends in the evening, which rocked the schedule a bit.

Habit Tracking

- Food Habits: 3/5
- Water (320ml cup): 6
- Exercise: None
- Sleep Habit: 3/5

February 04, 2024

- cw: 85.9 kgs

Hello 2024!

2023 surely hasn’t been a good year in terms of staying healthy. I’ve been eating a lot, and that shows. I’ve gained 15 kgs in just a span of a year, and I don’t feel well. I have a trek towards the end of the year, and I need to start working out and feel stronger. Be light and swift!

February 05, 2024 12:00 AM

2024 - Hope

A thread to lose weight and see a healthier version of me, v2024.6?. This is inspired by closely following Priyanka and Jason.

16 weeks until June; let’s see how I fare. I started a similar challenge in 2021, but motivations then and now are quite different.

To explain the jargon:

  • hw: highest weight
  • sw: starting weight
  • cw: current weight
  • gw: goal weight

The log will be updated weekly, with the latest first. In case you want to start from the beginning, jump here.

For daily updates, check the daily fitness log

Ok, let’s start.

- hw: 85.9 kgs (05/02/2024)
- sw: 85.9 kgs (05/02/2024)

February 12, 2024

- cw: 83.9 kgs (12/02/2024)

- gw0: 81 kgs
- gw1: 78 kgs
- gw2: 75 kgs
- gw3: 73 kgs
- gw4: 70 kgs

A good first week of tracking. I tried to eat healthy, work out, and move. I failed a bunch of times, but I kept moving ahead, looking to the next day. In the starting few days the drop was quite huge, maybe water weight from the past days, but then it stabilized at 83.7 kgs, so all the drop after that is what I actually lost. The first goal weight is still a bit far; hopefully I get there in the next 2-3 weeks.

February 05, 2024

- cw: 85.9 kgs (05/02/2024)

- gw0: 81 kgs
- gw1: 78 kgs
- gw2: 75 kgs
- gw3: 73 kgs
- gw4: 70 kgs

Hello 2024!

2023 surely hasn’t been a good year in terms of staying healthy. I’ve been eating a lot, and that shows. I’ve gained 15 kgs in just a span of a year, and I don’t feel well. I have a trek towards the end of the year, and I need to start working out and feel stronger. Be light and swift!

February 05, 2024 12:00 AM

New Blog


This is the beginning of my new blog! While https://blog.araj.me was previously running on Ghost as well, this is a new install, primarily because I couldn't easily get the data back from my previous Ghost install. It still lives in a MySQL instance, so old posts might appear on this instance too if I feel like it at some point.

What am I going to write about? I've been working a lot on my homelab setup, so that is probably going to be the starting point. I have also been trying out OpenWRT for my router (running on an EdgeRouter X; who could've thought it can run with 95% space available and over 65% free memory) and struggling to re-configure VLANs to segregate my homelab, "regular internet" for my wife and guests, and IoT stuff. Setting up VLANs on OpenWRT was not fun; I took down the internet a couple of times, which wasn't appreciated at home. So, I ended up flashing another old TP-Link router I had to learn OpenWRT, so I can try out settings there before applying them to the main router.

My homelab currently runs on an Intel NUC 10 i7 (6C12T, 16G RAM), which has been plenty for my current use cases. I've over-provisioned it with Proxmox VE as the hypervisor of choice. I am using an actual hypervisor-based setup for the first time, and there is no going back now! For some reason, I tried out XCP-ng as well, but with XOA I couldn't figure out how to do some stuff, so that setup is currently turned off. Maybe I'll dust it off again at some point. I do have 2 more nodes on standby to run more things, but that'll probably happen once I shift to my new house (hopefully soon!).

by Abhilash Raj at January 10, 2024 05:44 PM

Safeguarding Our Digital Lives: As Prevention is Better than the Cure

Today, I stumbled upon some deeply concerning news regarding the unauthorized leak of private pictures belonging to a 16-year-old girl from her online account. This incident serves as a stark reminder of the risks we face in the digital world. We must exercise caution and thoughtfulness when sharing anything online, as once something is uploaded, it can be extremely challenging, almost impossible, to completely remove it. Almost all of us know the trouble we have to go through to remove our own pictures from fake profiles on social media, and their customer support is nearly non-existent.
July 10, 2023 12:00 AM
