Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

Observations as I Learn to Read, 2

I finally got done transferring all my highlights, thoughts, and notes from Adler into a notes file on the computer. This was my first really large notes file, filled with hierarchies of headlines and tags, and Org Mode handled it like a champ!

It took me five times as long to transfer and organise the notes as it took me to read the book. This was very painful!
I am writing this down, so that my future self remembers!
Very, very, painful!

So I need to be intentional about books that I want to read analytically, and change my reading habits.

  1. Have a one-to-one ratio of note-taking to reading.
    This means for every ten minutes I want to read, I need to allocate at least the same amount of time for bringing my notes in.
  2. All of the above has to happen in a single session.
    So reading, followed by a bit of reflection, followed by transferring my notes and annotations out to Org Roam. So my old hour of reading is now 30m of reading, followed by 30m of thinking and jotting my thoughts down.
  3. Cross my fingers, and hope that it’ll all work out. Hope that something sane, something intelligible emerges in the end.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


July 08, 2025 12:25 PM

Creating Pull request with GitHub Action

---
name: Testing Gha
on:
  workflow_dispatch:
    inputs:
      GIT_BRANCH:
        description: The git branch to be worked on
        required: true

jobs:
  test-pr-creation:
    name: Creates test PR
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: write
    env:
      GIT_BRANCH: ${{ inputs.GIT_BRANCH }}
    steps:
      - uses: actions/checkout@v4
      - name: Updates README
        run: echo date >> README.md

      - name: Set up git
        run: |
          git switch --create "${GIT_BRANCH}"
          ACTOR_NAME="$(curl -s https://api.github.com/users/"${GITHUB_ACTOR}" | jq --raw-output &apos.name // .login&apos)"
          git config --global user.name "${ACTOR_NAME}"
          git config --global user.email "${GITHUB_ACTOR_ID}+${GITHUB_ACTOR}@users.noreply.github.com"

      - name: Add README
        run: git add README.md

      - name: Commit
        run: >-
          git diff-index --quiet HEAD ||
          git commit -m "test commit msg"
      - name: Push to the repo
        run: git push origin "${GIT_BRANCH}"

      - name: Create PR as draft
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
          gh pr create
          --draft
          --base main
          --head "${GIT_BRANCH}"
          --title "test commit msg"
          --body "pr body"

      - name: Retrieve the existing PR URL
        id: existing-pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url= >> "${GITHUB_OUTPUT}"

          gh pr view
          --json 'url'
          --jq '.url'
          --repo '${{ github.repository }}'
          '${{ env.GIT_BRANCH }}'
          >> "${GITHUB_OUTPUT}"
      - name: Select the actual PR URL
        id: pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url=
          >> "${GITHUB_OUTPUT}"

          echo '${{ steps.existing-pr.outputs.pull_request_url }}'
          >> "${GITHUB_OUTPUT}"

      - name: Log the pull request details
        run: >-
           echo 'PR URL: ${{ steps.pr.outputs.pull_request_url }}' | tee -a "${GITHUB_STEP_SUMMARY}"


      - name: Instruct the maintainers to trigger CI by undrafting the PR
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
            gh pr comment
            --body 'Please mark the PR as ready for review to trigger PR checks.'
            --repo '${{ github.repository }}'
            '${{ steps.pr.outputs.pull_request_url }}'

The above is an example of how to create a draft PR via GitHub Actions. We need to give the GitHub Action permission to create PRs in the repository (workflow permissions in the repository settings).

workflow_permissions.png
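
Since the workflow only triggers on workflow_dispatch, it can also be kicked off from the CLI; a small sketch (the branch name is just an example):

# trigger the workflow above and pass the required GIT_BRANCH input
gh workflow run "Testing Gha" -f GIT_BRANCH=test-branch-1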

Hopefully, this blogpost will help my future self.

by Anwesha Das at July 06, 2025 06:22 PM

Emacs Package Updation Checklist


I had never updated my Emacs packages until recently, because Emacs is where all my writing happens, and so I’m justifiably paranoid.
But then some packages stopped working due to various circumstances1, and an update solved it.

So I’ve decided to update my packages once a quarter, so that I don’t lose days yak shaving when something goes wrong, and so that I handle breakage on my terms and not the machine’s.

As far as package management goes, I want to keep things simple.
In fact, I still haven’t graduated to use-package or straight.el because my package needs are few and conservative2. And so, while there are automatic update options out there, I’ll just stick to updating them manually, every quarter.

Ergo, this is the checklist I’ll use next time onwards …

  1. Stop the Emacs user service, systemctl --user stop emacs
  2. Backup the emacs folder in ~/.config (see the shell sketch after this list)
  3. Start emacs manually (not the service).
  4. M-x package-refresh-contents
  5. M-x package-upgrade-all
  6. Problems? Quit emacs. Revert the backup folder.
  7. In the end, start the Emacs user service, systemctl --user start emacs
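
A rough shell sketch of the non-Emacs steps (the backup path here is just an example, not a fixed convention):

# back up, upgrade inside Emacs, and restore only if something broke
systemctl --user stop emacs                              # step 1
cp -a ~/.config/emacs ~/.config/emacs.backup             # step 2: keep a copy of the config
emacs                                                    # step 3: run steps 4 and 5 (M-x ...) inside this session
# step 6, only if the upgrade broke something:
#   rm -rf ~/.config/emacs && mv ~/.config/emacs.backup ~/.config/emacs
systemctl --user start emacs                             # step 7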

There’s an Org mode task, scheduled quarterly, so that I won’t forget.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. While I don’t want updated packages, I do want updated Emacs and that broke stuff 😂 ↩︎

  2. The biggest change I foresee is if JetBrains ever turn evil and I have to move off their editors and subsequently need to use Emacs as an IDE ↩︎

July 06, 2025 03:06 AM

How to Read a Book: 005, On Reading Speed


image courtesy, Simon & Schuster


This has never been a bugbear for me. A lifetime of reading has led me to read at a fairly fast clip. But Adler puts into specific words what it is that I actually, subconsciously, do. This will help me advise my younger friends, where earlier all I could offer was a “Just keep at it”.

Below followeth Adler’s advice …

  1. Great speed in reading is a dubious achievement; it is of value only if what you have to read is not really worth reading. A better formula is this:
    Every book should be read no more slowly than it deserves, and no more quickly than you can read it with satisfaction and comprehension. In any event, the speed at which they read, be it fast or slow, is but a fractional part of most people’s problem with reading
  2. The ideal is not merely to be able to read faster, but to be able to read at different speeds — and to know when the different speeds are appropriate.

So if that is the ideal, how do we go about increasing our speed if we are slow or irregular? Adler has an observation and a suggestion that’ll take us most of the way there.

  1. The eyes of young or untrained readers “fixate” as many as five or six times in the course of each line that is read. (The eye is blind while it moves; it can only see when it stops.) Thus single words or at the most two-word or three-word phrases are being read at a time, in jumps across the line. Even worse than that, the eyes of incompetent readers regress as often as once every two or three lines—that is, they return to phrases or sentences previously read.
  2. Place your thumb and first two fingers together. Sweep this “pointer” across a line of type, a little faster than it is comfortable for your eyes to move. Force yourself to keep up with your hand. You will very soon be able to read the words as you follow your hand. Keep practicing this, and keep increasing the speed at which your hand moves, and before you know it you will have doubled or trebled your reading speed.

With a caveat however …

  • What exactly have you gained if you increase your reading speed significantly? It is true that you have saved time—but what about comprehension? Has that also increased, or has it suffered in the process?
    It is worth emphasizing, therefore, that it is precisely comprehension in reading that this book seeks to improve. You cannot comprehend a book without reading it analytically; analytical reading, as we have noted, is undertaken primarily for the sake of comprehension (or understanding).


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


July 04, 2025 11:30 PM

Joseph Conrad Foresees Aaron Swartz and the AI Bandits


via PRH NZ


Been listening to Joseph Conrad’s Heart of Darkness in the little cracks of time in the day1 and, as always, being led to the sad and inevitable conclusion that we always fail to learn from what came before.

It was as unreal as everything else—as the philanthropic pretence of the whole concern, as their talk, as their government, as their show of work. The only real feeling was a desire to get appointed to a trading-post where ivory was to be had, so that they could earn percentages. They intrigued and slandered and hated each other only on that account—but as to effectually lifting a little finger—oh, no.

By heavens! there is something after all in the world allowing one man to steal a horse while another must not look at a halter. Steal a horse straight out. Very well. He has done it. Perhaps he can ride. But there is a way of looking at a halter that would provoke the most charitable of saints into a kick.

Aaron Swartz had to give his life for his beliefs, yet when the robber barons thieve, everything is suddenly alright.
Greed, like love, never dies!


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. after hearing it being heavily referred to in The Rest is History’s episodes on the rape, pillage and exploitation of the Congo (Episodes 538-541) ↩︎

July 04, 2025 02:47 AM

ChatGPT and Images

I’ve been working on a few side projects and using ChatGPT for ideation and brainstorming around ideas and features for the MVP. As part of this, I needed a logo for my app. Naturally, I turned to AI to help me generate one.

However, I noticed that when generating images, ChatGPT doesn’t always follow the guidelines perfectly. Each time I asked for a new version, it would create a completely different image, which made it difficult to iterate or make small tweaks.

But I found a better way.

Instead of generating a brand new image every time, I first explained my app idea and the name. ChatGPT generated an image I liked.

So I asked ChatGPT to generate the JSON for the image instead. I then manually tweaked the JSON file to adjust things exactly the way I wanted. When I asked ChatGPT to generate the image based on the updated JSON, it finally created the image as per my request — no random changes, just the specific adjustments I needed.

Exploration Phase

SplitX logo

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "black",
    "elements": [
      {
        "type": "text",
        "content": "SplitX",
        "font_style": "bold",
        "font_color": "white",
        "position": "center",
        "font_size": "large"
      },
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": ["#FF4E50", "#F9D423"]
          },
          {
            "section": "bottom-left",
            "gradient": ["#F9D423", "#FC913A"]
          },
          {
            "section": "top-right",
            "gradient": ["#24C6DC", "#514A9D"]
          },
          {
            "section": "bottom-right",
            "gradient": ["#514A9D", "#E55D87"]
          }
        ],
        "position": "center behind text",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

Final Design

SplitX logo Updated JSON

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "transparent",
    "elements": [
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": [
              "#FF4E50",
              "#F9D423"
            ]
          },
          {
            "section": "bottom-left",
            "gradient": [
              "#F9D423",
              "#FC913A"
            ]
          },
          {
            "section": "top-right",
            "gradient": [
              "#24C6DC",
              "#514A9D"
            ]
          },
          {
            "section": "bottom-right",
            "gradient": [
              "#514A9D",
              "#E55D87"
            ]
          }
        ],
        "position": "center ",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

If you want to tweak or refine an image, first generate the JSON, make your changes there, and then ask ChatGPT to generate the image using your updated JSON. This gives you much more control over the final result.

Cheers!

P.S. Feel free to check out the app — it's live now at https://splitx.org/. Would love to hear what you think!

July 03, 2025 01:28 PM

Observations as I Learn to Read, 1

This is a smol log as I try to reason out what is bothering me about getting the most out of the books I want to, uh, get the most out of.
Something is rankling me, but I don’t know what.

As I go through How to Read a Book, I’ve realised that I need to digest a book in a manner of speaking, especially on my subsequent reads.

My ideal is to pick up a book, and read through it at the speed it ought to take; highlighting, annotating and writing down my thoughts as I go.
And while I do that, also write things down here on the blog, as I’ve been attempting to do with Adler.

Rather, I thought I would do that with Adler.
But through force of habit, I’ve gone and done the same thing I’ve always done.
Read through the entire book, highlighting as I go.
So now I have a book full of notes, and a few thoughts, but all the “brilliant”1 insights that struck me as I read the book have vanished from my mind, as has any tentative mental model of the structure of the book.
And I’m frustrated with this state of affairs.
What happens then, is me having just a page full of notes without any of my thoughts in there, or trying to recapture lightning in a bottle over months of agonising effort.
Neither appeals to me.

The only things I can think of doing right now are:

  1. Keep a notebook beside me and write my thoughts down. The flyleaves are barely enough for me to keep track of running highlights across pages.
  2. Intentionally slow down, even more. I don’t know what this will do to my flow.
  3. Read in chunks. And then stop and write about them. I’ll probably miss the big picture. But I’ll arrive at it slowly, eventually. And the writing will reflect that. Is that good or bad? I don’t really know.

So I think I’m going to buckle down and finish getting my notes from Adler into posts in the coming weeks and try the above on my next book.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. very debatable 😂 ↩︎

July 03, 2025 01:03 PM

How I understood the importance of FOSS, i.e. Free and Open Source Software

Hello people of the world wide web.
I'm Titas, a CS freshman trying to learn programming and build some cool stuff. Here's how I understood the importance of open source.

The first time I heard of open source was about 3 years ago in a YouTube video, but I didn't think much of it.
Read about it more and more on Reddit and in articles.

Fast forward to after high school — I'd failed JEE and had no chance of getting into a top engineering college. So I started looking at other options, found a degree and said to myself:
Okay, I can go here. I already know some Java and writing code is kinda fun (I only knew basics and had built a small game copying every keystroke of a YouTube tutorial).
So I thought I could learn programming, get a job, and make enough to pay my bills and have fun building stuff.

Then I tried to find out what I should learn and do.
Being a fool, I didn't look at articles or blog posts — I went to Indian YouTube channels.
And there was the usual advice: Do DSA & Algorithms, learn Web Development, and get into FAANG.

I personally never had the burning desire to work for lizard man, but the big thumbnails with “200k”, “300k”, “50 LPA” pulled me in.
I must’ve watched 100+ videos like that.
Found good creators too like Theo, Primeagen, etc.

So I decided I'm going to learn DSA.
First, I needed to polish my Java skills again.
Pulled out my old notebook and some YT tutorials, revised stuff, and started learning DSA.

It was very hard.
Leetcode problems weren't easy — I was sitting for hours just to solve a single problem.
3 months passed by — by then I had learnt arrays, strings, linked lists, searching, and sorting.
But solving Leetcode problems wasn't entertaining or fun.
I used to think — why should I solve these abstract problems if I want to work in FAANG (which I don't even know if I want)?

Then I thought — let's learn some development.
Procrastinated on learning DSA, and picked up web dev — because the internet said so.
Learnt HTML and CSS in about 2-3 weeks through tutorials, FreeCodeCamp, and some practice.

Started learning JavaScript — it's great.
Could see my output in the browser instantly.
Much easier than C, which is in my college curriculum (though I had fun writing C).

Started exploring more about open source on YouTube and Reddit.
Watched long podcasts to understand what it's all about.
Learnt about OSS — what it is, about Stallman, GNU, FOSS.
OSS felt like an amazing idea — people building software and letting others use it for free because they feel like it.
The community aspect of it.
Understood why it's stupid to have everything under the control of a capitalist company — one that can just decide one day to stop letting you use your own software that you paid for.

Now I’m 7 months into college, already done with sem 1, scored decent marks.
I enjoy writing code but haven't done anything substantial.
So I thought to ask for some help. But who to ask?

I remembered I'd heard about this distant cousin, Kushal, who lives in Europe and has built some great software, and my mother mentioned him like he was some kind of a genius. I once had a brief conversation with him via text about whether I should take admission in a BCA rather than an engineering degree, and his advice gave me some motivation and positivity. He said:

“BCA or BTech will for sure get you a job faster than traditional studying. If you can put in the hours, that is way more important than IQ.
I have a very average IQ but I just contributed to many projects.”

So 7 months later, I decided to text him again — and surprisingly, he replied and agreed to talk with me on a call.
Spoke with him for 45 odd minutes and asked a bunch of questions about software engineering, his work, OSS, etc.

Had much better clarity after talking with him.
He gave me the dgplug summer training docs and a Linux book he wrote.

So I started reading the training docs.

  • Step 0: Install a Linux distro → already have it ✅
  • Step 1: Learn touch typing → already know it ✅

Kept reading the training docs.
Read a few blog posts on the history of open source — already knew most of the stuff but learnt some key details.

Read a post by Anwesha on her experience with hacking culture and OSS as a lawyer turned software engineer — found it very intriguing.

Then watched the documentaries Internet's Own Boy and Coded Bias.
Learnt much more about Aaron Swartz than I knew — I only knew he co-founded Reddit and took his own life after getting caught trying to free the JSTOR archive through MIT's network.

Now I had a deeper understanding of OSS and the culture.
But I had a big question about RMS — why was he so fixated on the freedom to hack and change stuff in the software he owned?
(Yes, the Free in FOSS doesn’t stand for free of cost — it stands for freedom.)

I thought free of cost makes sense — but why should someone have the right to make changes in a paid software?
Couldn't figure it out.
Focused on JS again — also, end-semester exams were coming.
My university has 3 sets of internal exams before the end-semester written exams. Got busy with that.

Kept writing some JS in my spare time.
Then during my exams...

It was 3:37 am, 5 June. I had my Statistics exam that morning.
I was done with studying, so I was procrastinating — watching random YouTube videos.
Then this video caught my attention:
How John Deere Steals Farmers of $4 Billion a Year

It went deep into how John Deere installs software into their tractors to stop farmers and mechanics from repairing their own machines.
Only authorized John Deere personnel with special software could do repairs.
Farmers were forced to pay extra, wait longer, and weren’t allowed to fix their own property.

Turns out, you don’t actually buy the tractor — you buy a subscription to use it.
Even BMW, GM, etc. make it nearly impossible to repair their cars.
You need proprietary software just to do an oil change.

Car makers won’t sell the software to these business owners, BUT they'll offer $7,500/year subscriptions to use their software. One auto shop owner explained how he has to pay $50,000/year in subscriptions just to keep his business running.

These monopolies are killing small businesses.

It’s not just India — billion-dollar companies everywhere are hell-bent on controlling everything.
They want us peasants to rent every basic necessity — to control us.

And that night, at 4:15 AM, I understood:

OSS is not just about convenience.
It’s not just for watching movies with better audio or downloading free pictures for my college projects.
It’s a political movement — against control.
It’s about the right to exist, and the freedom to speak, share, and repair.


That's about it. I'm not a great writer — it's my first blog post.

Next steps?
Learn to navigate IRC.
Get better at writing backends in Node.js.
And I'll keep writing my opinions, experiences, and learnings — with progressively better English.

print("titas signing out , post '0'!")
June 23, 2025 07:55 AM

go workspaces usage in kubernetes project

Kubernetes (k/k) recently started using Go Workspaces to manage its multi-module setup more effectively.

The goal is to help keep things consistent across the many staging repositories and the main k/k project.

At the root of the k/k repo, you’ll find go.work and go.work.sum.

These files are auto-generated by the hack/update-vendor.sh script.

Specifically, the logic lives around lines 198–219 and 293–303.

At a high level, the script runs the following commands from the root of the repo:

go work init
go work edit -go 1.24.0 -godebug default=go1.24
go work edit -use .

git ls-files -z ':(glob)./staging/src/k8s.io/*/go.mod' \
  | xargs -0 -n1 dirname -z \
  | xargs -0 -n1 go work edit -use

go mod download
go work vendor

This creates:

  • a go.work file at the repo root:

go 1.24.0

godebug default=go1.24

use (
    .
    ./staging/src/k8s.io/api
    ./staging/src/k8s.io/apiextensions-apiserver
    ./staging/src/k8s.io/apimachinery
    ...
    ./staging/src/k8s.io/sample-controller
)

  • a go.work.sum file that tracks checksums for the workspace modules — like go.sum, but at the workspace level.

  • and the vendor/ directory, which is populated via go work vendor and collects all dependencies from the workspace modules.

June 19, 2025 12:00 AM

OpenSSL legacy and JDK 21

openssl logo

While updating the Edusign validator to a newer version, I had to build the image with JDK 21 (which is available in Debian Sid). And while the application starts, it fails to read the TLS keystore file with a specific error:

... 13 common frames omitted
Caused by: java.lang.IllegalStateException: Could not load store from '/tmp/demo.edusign.sunet.se.p12'
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:140) ~[spring-boot-3.4.4.jar!/:3.4.4]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.createKeyStore(JksSslStoreBundle.java:107) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 25 common frames omitted
Caused by: java.io.IOException: keystore password was incorrect
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2097) ~[na:na]
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:228) ~[na:na]
at java.base/java.security.KeyStore.load(KeyStore.java:1500) ~[na:na]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:136) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 26 common frames omitted
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
... 30 common frames omitted

I understood that somehow it was not able to read the file due to a bad passphrase. But the same file with the same passphrase could be opened by the older version of the application (in the older containers).

After spending too many hours reading, I finally found the trouble. OpenSSL was using too new an algorithm. By default it uses AES_256_CBC for encryption and PBKDF2 for key derivation. But if we pass -legacy to the openssl pkcs12 -export command, then it uses RC2_CBC or 3DES_CBC for certificate encryption, depending on whether the RC2 cipher is enabled.
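
Something along these lines should do the re-export with the legacy algorithms; the certificate, key, and passphrase names below are placeholders, not the real ones:

# export a PKCS#12 keystore using the legacy (RC2_CBC / 3DES_CBC) ciphers
openssl pkcs12 -export -legacy \
    -in demo.edusign.sunet.se.crt \
    -inkey demo.edusign.sunet.se.key \
    -out /tmp/demo.edusign.sunet.se.p12 \
    -passout pass:changeit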

This finally solved the issue and the container started cleanly.

June 04, 2025 02:06 PM

PyCon Lithuania, 2025

Each year, I try to experience a new PyCon. In 2025, PyCon Lithuania was added to my PyCon calendar.

pyon_lt_6.jpg

Day before the conference

What made this PyCon special is that we were traveling there as a family, and the conference days coincided with the Easter holidays. We utilized that to explore the city—the ancient cathedrals, palaces, old cafes, and of course the Lithuanian cuisine: Šaltibarščiai, Balandeliai and Cepelinai.

Tuesday

The 22nd, the day before the conference, was all about practicing the talk and meeting the community. We had the pre-conference mingling session with the speakers and volunteers. It was time to meet some old and many new people. Then it was time for PyLadies. Inga from PyLadies Lithuania, Nina from PyLadies London and I had a lovely dinner discussion—good food with the PyLadies community, technology, and us.

pyon_lt_2.jpg

Wednesday

The morning started early for us on the day of the conference. All 3 of us had different responsibilities during the conference. While Py was volunteering, I was speaking and Kushal was the morning keynoter. A Python family in a true sense :)

pyon_lt_1.jpg

I had my talk, “Using PyPI Trusted Publishing to Ansible Release”, scheduled for the afternoon session. The talk was about automating the Ansible community package release process with GitHub Actions, using the trusted publisher in PyPI. The talk described what trusted publishing is; I explained the need for it and how to use it. I summarised the manual Ansible release process in a nutshell, and then moved to what the Ansible release process is now, with GitHub Actions and Trusted Publishing. Then came the most important part: the lessons learned in the process, and how other open-source communities can get help and benefit from it. Here is the link for the slides of my talk. I got questions regarding trusted publishing, my experience as a release manager, and of course Ansible.

pyon_lt_0.jpeg

It was time to bid goodbye to PyCon LT and come back home. See you next year. Congratulations to the organizers for doing a great job organizing the conference.

pyon_lt_4.jpg

by Anwesha Das at April 30, 2025 10:49 AM

go_modules: how to create go vendor tarballs from subdirectories

The go_modules OBS service is used to download, verify, and vendor Go module dependency sources.

As described in the source project’s (obs-service-go_modules) README:

Using the go.mod and go.sum files present in a Go application, obs-service-go_modules will call Go tools in sequence:

  • go mod download
  • go mod verify
  • go mod vendor

obs-service-go_modules then creates a vendor.tar.gz archive (or another supported compression format) containing the vendor/ directory generated by go mod vendor.
This archive is produced in the RPM package directory and can be committed to OBS to support offline Go application builds for openSUSE, SUSE, and various other distributions.

The README also provides a few usage examples for packagers.

However, it wasn’t immediately clear how to use the go_modules OBS service to create multiple vendor tarballs from different subdirectories within a single Git source repository.

Below is an example where I create multiple vendor tarballs from a single Git repo (in this case, the etcd project):

<services>

  <!-- Service #1 -->
  <service name="obs_scm">
    <param name="url">https://github.com/etcd/etcd.git</param>
    <param name="scm">git</param>
    <param name="package-meta">yes</param>
    <param name="versionformat">@PARENT_TAG@</param>
    <param name="versionrewrite-pattern">v(.*)</param>
    <param name="revision">v3.5.21</param>
    <param name="without-version">yes</param>
  </service>

  <!-- Service #2 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
  </service>

  <!-- Service #3 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
    <param name="subdir">server</param>
    <param name="vendorname">vendor-server</param>
  </service>

  <!-- Service #4 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
    <param name="subdir">etcdctl</param>
    <param name="vendorname">vendor-etcdctl</param>
  </service>

</services>

image

The above _service file defines four services:

  • Service 1 clones the GitHub repo github.com/etcd/etcd.git into the build root. The resulting output is a cpio archive blob—etcd.obscpio.

  • Service 2 locates the etcd.obscpio archive, extracts it, runs go mod download, go mod verify, and go mod vendor from the repo root, and creates the default vendor.tar.gz.

  • Service 3 and Service 4 work the same as Service 2, with one difference: they run the Go module commands from subdirectories:

    • Service 3 changes into the server/ directory before running the Go commands, producing a tarball named vendor-server.tar.gz.
    • Service 4 does the same for the etcdctl/ directory, producing vendor-etcdctl.tar.gz.

🔍 Note the subdir and vendorname parameters. These are the key to generating multiple vendor tarballs from various subdirectories, with custom names.


I found the full list of parameters accepted by the go_modules service defined here1:

...
parser.add_argument("--strategy", default="vendor")
parser.add_argument("--archive")
parser.add_argument("--outdir")
parser.add_argument("--compression", default=DEFAULT_COMPRESSION)
parser.add_argument("--basename")
parser.add_argument("--vendorname", default=DEFAULT_VENDOR_STEM)
parser.add_argument("--subdir")
...

The default values are defined here2:

DEFAULT_COMPRESSION = "gz"
DEFAULT_VENDOR_STEM = "vendor"

Also, while writing this post, I discovered that the final vendor tarball can be compressed in one of the following supported formats3:

.tar.bz2
.tar.gz
.tar.lz
.tar.xz
.tar.zst

And finally, here’s the list of supported source archive formats (the blob from which the vendor tarball is created), powered by the libarchive Python module4:

READ_FORMATS = set((
    '7zip', 'all', 'ar', 'cab', 'cpio', 'empty', 'iso9660', 'lha', 'mtree',
    'rar', 'raw', 'tar', 'xar', 'zip', 'warc'
))

  1. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L227-L233 

  2. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L46C1-L47C31 

  3. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L119-L124 

  4. https://github.com/Changaco/python-libarchive-c/blob/1a5b505ab1818686c488b4904445133bcc86fb4d/libarchive/ffi.py#L243-L246 

April 10, 2025 12:00 AM

Blog Questions Challenge 2025

1. Why did you make the blog in the first place?

This blog initially started as part of the summer training by DGPLUG, where the good folks emphasize the importance of blogging and encourage everyone to write—about anything! That motivation got me into the habit, and I’ve been blogging on and off ever since.

2. What platform are you using to manage your blog and why did you choose it?

I primarily write on WriteFreely, hosted by Kushal, who was kind enough to host an instance. I also occasionally write on my WordPress blog. So yeah, I have two blogs.

3. Have you blogged on other platforms before?

I started with WordPress because it was a simple and fast way to get started. Even now, I sometimes post there, but most of my recent posts have moved to the WriteFreely instance.

4. How do you write your posts?

I usually just sit down and write everything in one go. Then comes the editing part—skimming through it once, making quick changes, and then hitting publish.

5. When do you feel most inspired to write?

Honestly, I don’t wait for inspiration. I write whenever I feel like it—sometimes in a diary, sometimes on my laptop. A few of those thoughts end up as blog posts, while the rest get lost in random notes and files.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

It depends. After reading a few books and articles on writing, I started following a simple process: finish a draft in one sitting, come back to it later for editing, and then publish.

7. Your favorite post on your blog?

Ahh! This blog post on Google Cloud IAM is one I really like because people told me it was well-written! :)

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

Nope! I like it as it is. Keeping it simple for now.

A big thanks to Jason for mentioning me in the challenge!

Cheers!

March 29, 2025 05:32 AM

Access data persisted in Etcd with etcdctl and kubectl

I created the following CRD (Custom Resource Definition) with — kubectl apply -f crd-with-x-validations.yaml:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be in the form: <plural>.<group>
  name: myapps.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: example.com
  scope: Namespaced
  names:
    # kind is normally the CamelCased singular type. 
    kind: MyApp
    # singular name to be used as an alias on the CLI
    singular: myapp
    # plural name in the URL: /apis/<group>/<version>/<plural>
    plural: myapps
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            x-kubernetes-validations: 
              - rule: "self.minReplicas <= self.maxReplicas"
                messageExpression: "'minReplicas (%d) cannot be larger than maxReplicas (%d)'.format([self.minReplicas, self.maxReplicas])"
            type: object
            properties:
              minReplicas:
                type: integer
              maxReplicas:
                type: integer

I want to check how the above CRD is persisted in Etcd.

I have two ways to do the job:

Option 1:

Use etcdctl to directly verify the persisted data in Etcd.1

My three-step process:

  • Exec inside the etcd pod in the kube-system namespace of your kubernetes cluster — kubectl exec -it -n kube-system etcd-kep-4595-cluster-control-plane -- /bin/sh
  • Create alias — alias e="etcdctl --endpoints 127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt"
  • Access the data — e get --prefix /registry/apiextensions.k8s.io/
sh-5.2# e get --prefix /registry/apiextensions.k8s.io/

/registry/apiextensions.k8s.io/customresourcedefinitions/shirts.stable.example.com
{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1beta1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","version":"v1","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"versions":[{"name":"v1","served":true,"storage":true}],"additionalPrinterColumns":[{"name":"Fruit","type":"string","JSONPath":".spec.color"}],"conversion":{"strategy":"None"},"preserveUnknownFields":false},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}

Option 2:

Use kubectl to access the persisted data from Etcd –

kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

> kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","resourceVersion":"594","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}],"conversion":{"strategy":"None"}},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}


  1. I realised that while I’m accessing the same CRD data with etcdctl and kubectl, I get a few different things in the output. In the case of etcdctl — I get (i) "version":"v1", (ii) the CRD schema stored in the field "validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}} and (iii) a top-level additionalPrinterColumns. While in the case of kubectl — I don’t get the above bits, and instead I get both the schema and the additionalPrinterColumns stored in the versions array - "versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}]. This is (maybe) something to do with how currently (as of writing) Kubernetes stores/persists CRD v1 as v1beta1 in Etcd, because v1 takes more space to represent the same CRD (due to denormalization of fields among multi-version CRDs) and we have CRDs in the wild that are already bumping against the max allowed size (Thank you, Jordan Liggitt, for explaining this.) Read this2 and this3 for some context. 

  2. code block, where the encoding version for CRDs is configured 

  3. Attempt to bump the storage version from v1beta1 → v1, but was blocked on k/k PR #82292 

February 25, 2025 12:00 AM

slabtop - to check kernel memory usage, and kubelet's container_memory_kernel_usage metrics

Today, I learnt about slabtop1, a command line utility to check the memory used by the kernel
(or as its man page says, it displays the kernel slab cache information in real time2).

image
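
For a quick non-interactive look, something like this should work (slabtop reads /proc/slabinfo, so it usually needs root):

# print the slab caches once, sorted by cache size, and show the top of the list
sudo slabtop -o -s c | head -n 20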


(Logging for the future me)

So, what led to me learning about slabtop? 🙂

Jason Braganza taught me this, while we were preparing for our upcoming conference talk (on Kubernetes Metrics)!

Precisely, the following is the metric (exposed by the kubelet component within a Kubernetes cluster) that led to the discussion.

# HELP container_memory_kernel_usage Size of kernel memory allocated in bytes.
# TYPE container_memory_kernel_usage gauge
container_memory_kernel_usage{container="",id="/",image="",name="",namespace="",pod=""} 0 1732452865827

And how does the kubelet get this “kernel memory allocation” information and feed it to the container_memory_kernel_usage metric?

The answer is (at least to the best of my understanding) –

The kubelet’s server package imports “github.com/google/cadvisor/metrics”3 (aka the cadvisor/metrics) module.

This cadvisor/metrics go module provides a NewPrometheusCollector() function (which kubelet uses here4).

The NewPrometheusCollector() function takes includedMetrics as one of its many parameters.

  r.RawMustRegister(metrics.NewPrometheusCollector(prometheusHostAdapter{s.host}, containerPrometheusLabelsFunc(s.host), includedMetrics, clock.RealClock{}, cadvisorOpts))

And when this includedMetrics contains cadvisormetrics.MemoryUsageMetrics (which it does in the case in question, check here5),

	includedMetrics := cadvisormetrics.MetricSet{
		...
		cadvisormetrics.MemoryUsageMetrics:  struct{}{},
		...
	}

then the NewPrometheusCollector() function exposes the container_memory_kernel_usage6 metric.

func NewPrometheusCollector(i infoProvider, f ContainerLabelsFunc, includedMetrics container.MetricSet, now clock.Clock, opts v2.RequestOptions) *PrometheusCollector {
        ...
        ...
	if includedMetrics.Has(container.MemoryUsageMetrics) {
		c.containerMetrics = append(c.containerMetrics, []containerMetric{
		       ...
		       ...
		       {
				name:      "container_memory_kernel_usage",
				help:      "Size of kernel memory allocated in bytes.",
				valueType: prometheus.GaugeValue,
				getValues: func(s *info.ContainerStats) metricValues {
					return metricValues{{value: float64(s.Memory.KernelUsage), timestamp: s.Timestamp}}
				},
			},
        ...
        ...

And as we see above in the definition of the container_memory_kernel_usage metric,
the valueType is prometheus.GaugeValue (so it’s a gauge-type metric),
and the value is value: float64(s.Memory.KernelUsage), where KernelUsage is defined here7 and interpreted here8.

I still feel I can go further down a few more steps to find out the true source of this information, but that’s all for now.


  1. which in turn gets the information from /proc/slabinfo

  2. to me it looks something like top or htop

  3. here: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L39C2-L39C38 

  4. kubelet registring the cadvisor metrics provided by cdvisor’s metrics.NewPrometheusCollector(...) function – https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L463 

  5. Kubelet server package creating a (cadvisor metrics based) set of includedMetrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L441-L451 

  6. codeblock adding container_memory_kernel_usage metrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/vendor/github.com/google/cadvisor/metrics/prometheus.go#L371-L393 

  7. Cadvisor’s MemoryStats struct providing KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/info/v1/container.go#L430-L432 

  8. Cadvisor’s setMemoryStats() function, setting value for KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/container/libcontainer/handler.go#L799-L803 

February 24, 2025 12:00 AM

I quit everything


I have never been the social media type of person. But that doesn’t mean I don’t want to socialize and get/stay in contact with other people. So although not being a power-user, I always enjoyed building and using my online social network. I used to be online on ICQ basically all my computer time and I once had a rich Skype contact list.

However, ICQ just died because people went away to use other services. I remember how excited I was when WhatsApp became available. To me it was the perfect messenger; no easier way to get in contact and chat with your friends and family (or just people you somehow had in your address book), for free. All of the services I’ve ever used followed one of two possible scenarios:

  • Either they died because people left for the bigger platform
  • or the bigger platform was bought and/or changed their terms of use to make any further use completely unjustifiable (at least for me)

Quitstory

  • 2011 I quit StudiVZ, a social network that I joined in 2006, when it was still exclusive for students. However, almost my whole bubble left for Facebook so to stay in contact I followed. RIP StudiVZ, we had a great time.
  • Also 2011 I quit Skype, when it was acquired by Microsoft. I was not too smart back then, but I already knew I wanted to avoid Microsoft. It wasn’t hard anyway, most friends had left already.
  • 2017 I quit Facebook. That did cost me about half of my connections to old school friends (or acquaintances) and remote relatives. But the terms of use (giving up all rights on any content to Facebook) and their practices (crawling all my connections to use their personal information against them) made it impossible for me to stay.
  • 2018 I quit WhatsApp. It was a hard decision because, as mentioned before, I was once so happy about this app’s existence, and I was using it as main communication channel with almost all friends and family. But 2014 WhatsApp was bought by Facebook. In 2016 it was revealed that Facebook was combining the data from messenger and Facebook platform for targeted advertising and announced changes on terms of use. For me it was not possible to continue using the app.
  • Also 2018 I quit Twitter. Much too late. It has been the platform that allowed the rise of an old orange fascist, gave him the stage he needed and did far from enough against false information spreading like crazy. I didn’t need to wait for any whistleblowers to know that the recommendation algorithm was favoring hate speech and misinformation, or to know that this platform was not good for my mental health, anyway. I’m glad though, I was gone before the takeover.
  • Also 2018 I quit my Google account. I was using it to run my Android phone, mainly. However, quitting Google never hurt me - syncing my contacts and calendars via CardDAV and CalDAV has always been painless. Google circles (which I peeked into for a week or so) never became a thing anyway. I had started using custom ROMs (mainly Cyanogen, later LineageOS) for all my phones anyway.
  • 2020 I quit Amazon. Shopping is actually more fun again. I still do online shopping occasionally, most often trying to buy from the manufacturers directly, but if I can I try to do offline shopping in our beautiful city.
  • 2021 I quit smartphone. I just stopped using my phone for almost anything except making and receiving calls. I have tried a whole bunch of things to gain control over the device but found that it was impossible for me. I found that the device had in fact more control over me than vice versa; I had to quit.
  • 2024 I quit Paypal. It’s a shame that our banks cannot come up with a convenient solution, and it’s also a shame I helped to make that disgusting person who happens to own Paypal even richer.
  • Also in 2024 I quit Github. It’s the biggest code repository in the world. I’m sure it’s the biggest hoster of FOSS projects, too. Why? Why sell that to a company like Microsoft? I don’t want to have a Microsoft account. I had to quit.

Stopped using the smartphone

Implications

Call them as you may; big four, big five, GAFAM/FAAMG etc. I quit them all. They have a huge impact on our lives, and I think it’s not for the better. They all have shown often enough that they cannot be trusted; they gather and link all the information about us they can lay their hands on and use it against us, selling us out to the highest bidder (and the second and third highest, because copying digital data is cheap). I’m not regretting my decisions, but they were not without implications. And in fact I am quite pissed, because I don’t think it is my fault that I had to quit. It is something that those big tech companies took from me.

  • I lost contact to a bunch of people. Maybe this is a FOMO kind of thing; it’s not that I was in contact with these distant relatives or acquaintances, but I had a low threshold of reaching out. Not so much, anymore.
  • People are reacting angrily if they find they cannot reach me. I am available via certain channels, but a lot of people don’t understand my reasoning for not joining the big networks. As if I was trying to make their lives more complicated than necessary.
  • I can’t do OAuth. If online platforms don’t implement their own login and authentication but instead rely on identification via the big IdPs, I’m out. Means I will probably not be able to participate in Advent of Code this year. It’s kind of sad.
  • I’m the last to know. Not being in that WhatsApp group, and not reading the Signal message about the meeting cancellation 5 minutes before scheduled start (because I don’t have Signal on my phone), does have that effect. There has been a certain engagement once, when you agreed to something or scheduled a meeting etc. But these days, everything can be changed and cancelled just minutes before some appointment with a single text message. I feel old(fashioned) when trusting in others’ engagement, but I don’t want to give it up, yet.

Of course there is still potential to quit even more: I don’t have a Youtube account (of course) but I still watch videos there. I do have a Netflix subscription, and cancelling that would put me into serious trouble with my family. I’m also occasionally looking up locations on Google maps, but only if I want to look at the satellite pictures.

However, the web is becoming more and more bloated with ads and trackers, old pages that were fun to browse in the earlier days of the web have vanished; it’s not so much fun to use anymore. Maybe HTTP/S will be the next thing for me to quit.

Conclusions

I’m still using the internet to read my news, to connect with friends and family and to sync and backup all the stuff that’s important to me. There are plenty of alternatives to big tech that I have found work really well for me. The recipe is almost always the same: If it’s open and distributed, it’s less likely to fall into the hands of tech oligarchs.

I’m using IRC, Matrix and Signal for messaging, daily. Of those, Signal may have the highest risk of disappointing me one day, but I do have faith. Hosting my own Nextcloud and Email servers has to date been a smooth and nice experience. Receiving my news via RSS and atom feeds gives me control over the sources I want to expose myself to, without being flooded with ads.

I have tried Mastodon and other Fediverse networks, but I was not able to move any of my friends there to make it actual fun. As mentioned, I’ve never been too much into social media, but I like(d) to see some vital signs of different people in my life from time to time. I will not do bluesky, as I cannot see how it differs from those big centralized platforms that have failed me.

It’s not a bad online life, and after some configuration it’s no harder to maintain than any social media account. I only wish it hadn’t been necessary for me to walk this path. The web could have developed very differently and be an open and welcoming space for everyone today. Maybe we’ll get there someday.

February 19, 2025 12:00 AM

Simple blogging engine

title: Simple blogging engine
published: 2025-02-18


As mentioned in the previous post, I have been using several frameworks for blogging. But the threshold to overcome to start writing new articles was always too high to just get going. Additionally, I’m getting more and more annoyed by the internet, or specifically by browsing the www via HTTP/S. It’s beginning to feel like hard work not to get tracked everywhere and not to support big tech and their fascist CEOs by using their services. That’s why I have found the gemini protocol interesting ever since I got to know about it. I wrote about it before:

Gemini blog post

That’s why I decided not to go HTTPS-first with my blog, but gemini-first. Although you’re probably reading this as the generated HTML or in your feed reader.

Low-threshold writing

To lower the barrier to getting started, I’m now using my tmux session that runs 24/7 on my home server. It’s the session I open by default on all my devices, because it contains my messaging (IRC, Signal, Matrix) and news (RSS feeds). Now it also contains a neovim session that lets me push all my thoughts into text files easily and from everywhere.

Agate

The format I write in is gemtext, a markup language that is even simpler than Markdown. Gemtext allows three levels of headings, links, lists, blockquotes and preformatted text, and that’s it. And to make my life even easier, I only need to touch a file .directory-listing-ok to let agate create an autoindex of each directory, so I don’t have to take care of housekeeping and linking my articles too much. I just went with this scheme to make sure my posts appear in the correct order:

blog
└── 2025
    ├── 1
    │   └── index.gmi
    └── 2
        └── index.gmi

When pointed to a directory, agate will automatically serve the index.gmi if it finds one.
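This is roughly how an agate instance for such a content directory can be launched; the content path, hostname and flags below are illustrative, not my actual setup, and may differ slightly between agate versions:

# serve ~/gemini/content for gemini://example.org on the default gemini port
agate --content ~/gemini/content \
      --hostname example.org \
      --addr 0.0.0.0:1965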

To serve the files in my gemlog, I just copy them over as they are, using rsync. If people only browsed the gemini space, I would be done at this point. I’m using agate, a gemini server written in Rust, to serve the static blog. Technically, gemini would allow more than that, using CGI to process requests and return responses dynamically, but simple is just fine.
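The copy step is a single rsync invocation along these lines (the source and destination paths are made up for illustration):

# sync the local gemlog into the directory agate serves from;
# --delete also removes posts that were deleted locally
rsync -av --delete ~/blog/ user@homeserver:/srv/gemini/content/blog/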

The not-so-low publishing threshold

However, if I ever want anyone to actually read this, sadly I will have to offer more than gemtext. Translating everything into HTML and compiling an atom.xml comes with some more challenges. Now I need some metadata like title and date. For now I’m just going to add that as formatted text at the beginning of each file I want to publish. The advantage is that I can filter out the files I want to keep private this way. Using ripgrep, I just find all files with the published directive and pipe them through my publishing script.
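That filtering step can be as small as this (publish.sh is just a placeholder name for my publishing script):

# find every gemtext file that carries a "published:" line
# and hand it to the publishing script, one file at a time
rg -l '^published:' ~/blog --glob '*.gmi' | while read -r post; do
    ./publish.sh "$post"
done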

To generate the HTML, I’m going the route gemtext -> markdown -> html, for lack of better ideas. Gemtext to Markdown is trivial; I only need to reformat the links (using sed in my case). To generate the HTML I use pandoc, although it’s way too powerful and not exactly lightweight for this task. But I just like pandoc. I’m adding simple.css so I don’t have to fiddle around with any design questions.

Simplecss
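A rough sketch of that conversion chain, assuming only the links need rewriting and using default pandoc settings (file names are illustrative):

# gemtext "=> URL label" links become markdown "[label](URL)";
# the rest of gemtext is close enough to markdown already
sed -E 's|^=>[[:space:]]+([^[:space:]]+)[[:space:]]+(.*)|[\2](\1)|' post.gmi > post.md

# let pandoc produce a standalone page styled with simple.css
pandoc --standalone --css simple.css --metadata title="My post" post.md -o post.html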

I was looking for an atom feed generator, until I noticed how easily this file can be generated manually. Again, a little bit of ripgrep and bash leaves me with an atom.xml that I’m actually quite happy with.
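The idea is simply to print the XML skeleton and one entry per published post, roughly like this (a real feed also needs <id> and <link> elements; the paths and feed metadata are placeholders):

{
  printf '<?xml version="1.0" encoding="utf-8"?>\n'
  printf '<feed xmlns="http://www.w3.org/2005/Atom">\n'
  printf '<title>my gemlog</title>\n<updated>%s</updated>\n' "$(date -u +%FT%TZ)"
  for post in $(rg -l '^published:' ~/blog --glob '*.gmi'); do
    title=$(rg -o -r '$1' '^title: (.*)' "$post" | head -n1)
    date=$(rg -o -r '$1' '^published: (.*)' "$post" | head -n1)
    printf '<entry>\n<title>%s</title>\n<updated>%sT00:00:00Z</updated>\n</entry>\n' "$title" "$date"
  done
  printf '</feed>\n'
} > atom.xml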

The yak can be shaved until the end of the world

I hope I have put everything out of the way to get started easily and quickly. I could keep configuring the system until the end of time to make unimportant things look better, but I don’t want to fall into that trap (again). I’m going to publish my scripts to a public repository soon, in case anyone feels inspired to go a similar route.

February 18, 2025 12:00 AM

Blog Questions Challenge 2025

title: Blog Questions Challenge 2025
published: 2025-02-18
tags: blogging


I’m picking up the challenge from Jason Braganza. If you haven’t, go visit his blog and subscribe to the newsletter ;)

Jason’s Blog

1. Why did you make the blog in the first place?

That’s been the first question I asked myself when starting this blog. It was part of the DGPLUG #summertraining and I kind of started without actually knowing what to do with it. But I did want to have my own little corner in cyberspace.

Why another blog?

2. What platform are you using to manage your blog and why did you choose it?

I have a home server running vim in a tmux session. The articles are written as gemtext, as I have decided that my gemlog should be the source of truth for my blog. I’ve written some little bash scripts to convert everything to HTML and an Atom feed as well, but I’m actually not very motivated anymore to care about website design. Gemtext is the simplest markup language I know, and keeping it simple makes the most sense to me.

Gemtext

3. Have you blogged on other platforms before?

I started writing on wordpress.com; without running my own server, it was the most accessible platform to me. When moving to my own infrastructure I used Lektor, a static website generator framework written in Python. It was quite nice and powerful, but in the end I wanted to get rid of the extra dependencies and simplify even more.

Lektor

4. How do you write your posts?

Rarely. If I write, I just write, basically the same way I would talk. There were a few posts where I did some research because I wanted to make them a useful and comprehensive source for future look-ups, but in most cases I’m simply too lazy. I don’t spend much time on structure or thinking about how to guide the reader through my thoughts; it’s just for me and anyone who cares.

5. When do you feel most inspired to write?

Always in situations when I don’t have the time to write, never when I do have the time. Maybe there’s something wrong with me.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

Yes, mostly. I do have a couple of posts that I didn’t publish immediately, so they are still not published. I find it hard to iterate on my own writing, so I try to avoid it by publishing immediately :)

7. Your favorite post on your blog?

The post I was looking up myself most often is the PostgreSQL migration thing. It was a good idea to write that down ;)

Postgres migration between multiple instances

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

I just did a major refactoring of the system, basically doing everything manually now. It forces me to keep things simple, because I think it should be simple to write and publish a text online. I also hope to have lowered the threshold for starting new posts. So, piloting the current system it is.

February 18, 2025 12:00 AM

pass using stateless OpenPGP command line interface

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP) instead. It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole spec here.

Installation

cargo install rsop rsop-oct

And copied the bash script from my repository to somewhere in my PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
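With the SOP interface, the encryption side of pass boils down to something like the following (the entry file names are only an illustration of what the patched pass does internally):

# SOP tools read the plaintext on stdin and write the ciphertext to stdout;
# the certificate (public key) is passed as an argument
rsop encrypt ~/.password-store/.gpg-key < entry.txt > entry.gpg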

Usage

Nothing has changed in my daily pass usage, except the number of times I am typing my PIN :)

February 12, 2025 05:26 AM

Using openpgp-card-tool-git with git

Part of the power of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time is openpgp-card-tool-git, for git signing & verification using OpenPGP, with my Yubikey doing the actual signing operation. I replaced the standard gpg for this use case with the oct-git command from this project.

Installation & configuration

cargo install openpgp-card-tool-git

Then you will have to update your git configuration (in my case the global configuration).

git config --global gpg.program <path to oct-git>

I am assuming that you already had it configured before for signing; otherwise you have to run the following two commands too.

git config --global commit.gpgsign true
git config --global tag.gpgsign true

Usage

Before you start using it, you want to save the pin in your system keyring.

Use the following command.

oct-git --store-card-pin

That is it; now git will sign your commits using the oct-git tool.
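To check that the setup actually works, one can look at the signature on the latest commit:

# show the signature status of the most recent commit
git log --show-signature -1

# or verify a specific commit explicitly
git verify-commit HEAD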

In the next blog post I will show how to use the other tools from the author for various OpenPGP operations.

February 11, 2025 11:12 AM

KubeCon + CloudNativeCon India 2024

Banner with KubeCon and Cloud Native Con India logos

Conference attendance had taken a hit since the onset of the COVID-19 pandemic. I attended many virtual conferences and was glad to present at a few, like FOSDEM - a conference I had always longed to present at.

Sadly, the virtual conferences did not have the feel of in-person ones. With 2024 here and being fully vaccinated, I started attending a few in-person conferences again. The year started with FOSSASIA in Hanoi, Vietnam, followed by a few more over the next few months.

December 2024 was going to be special as we were all waiting for the first edition of KubeCon + CloudNativeCon in India. I had planned to attend the EU/NA editions of the conference, but visa issues made those more difficult to attend. As fate would have it, India was the one planned for me.

KubeCon + CloudNativeCon India 2024 took place in the capital city, Delhi, India, from 11th - 12th December 2024, along with co-located events hosted at the same venue, Yashobhoomi Convention Centre, on 10th December 2024.

Venue

Let’s start with the venue. As an organizer of other conferences, the thing that blew my mind was the venue, YASHOBHOOMI (India International Convention and Expo Centre). The venue is huge enough to accommodate large-scale conferences, and I also got to know that the convention centre is still a work in progress, with more halls to come. If I heard correctly, another conference was running in parallel at the venue around the same time.

Now, let’s jump to the conference.

Maintainer Summit

The first day of the conference, 10th December 2024, was the CNCF Maintainers Summit. The event is exclusive to the people behind CNCF projects, providing space to showcase their projects and meet other maintainers face-to-face.

Due to the chilly and foggy morning, the event started a bit late to accommodate more participants for the very first talk. The event had a total of six talks, including the welcome note. Our project, Flatcar Container Linux, also had a talk accepted: “A Maintainer’s Odyssey: Time, Technology and Transformation”.

This talk took attendees through the journey of Flatcar Container Linux from a maintainer’s perspective. It shared Flatcar’s inspiration - the journey from a “friendly fork” of CoreOS Container Linux to becoming a robust, independent, container-optimized Linux OS. The beginning of the journey shared the daunting red CI dashboard, almost-zero platform support, an unstructured release pipeline, a mammoth list of outdated packages, missing support for ARM architecture, and more – hardly a foundation for future initiatives. The talk described how, over the years, countless human hours were dedicated to transforming Flatcar, the initiatives we undertook, and the lessons we learned as a team. A good conversation followed during the Q&A with questions about release pipelines, architectures, and continued in the hallway track.

During the second half, I hosted an unconference titled “Special Purpose Operating System WG (SPOS WG) / Immutable OSes”. The aim was to discuss the WG with other maintainers and enlighten the audience about it. During the session, we had a general introduction to the SPOS WG and immutable OSes. It was great to see maintainers and users from Flatcar, Fedora CoreOS, PhotonOS, and Bluefin joining the unconference. Since most attendees were new to Immutable OSes, many questions focused on how these OSes plug into the existing ecosystem and the differences between available options. A productive discussion followed about the update mechanism and how people leverage the minimal management required for these OSes.

I later joined the Kubeflow unconference. Kubeflow is a Kubernetes-native platform that orchestrates machine learning workflows through custom controllers. It excels at managing ML systems with a focus on creating independent microservices, running on any infrastructure, and scaling workloads efficiently. The discussion covered how ML training jobs utilize batch processing capabilities with features like job queuing and fault tolerance, while inference workloads operate in a serverless manner, scaling pods dynamically based on demand. Kubeflow abstracts away the complexity of different ML frameworks (TensorFlow, PyTorch) and hardware configurations (GPUs, TPUs), providing intuitive interfaces for both data scientists and infrastructure operators.

Conference Days

During the conference days, I spent much of my time at the booth and doing final prep for my talk and tutorial.

On the maintainers summit day, I went to check the room assigned for my conference-day sessions, but discovered that the room didn’t exist in the venue. So, on the conference days, I started by informing the organizers about the schedule issue. Then I proceeded to the keynote auditorium, where Chris Aniszczyk, CTO, Linux Foundation (CNCF), kicked off the conference by sharing updates about the Cloud Native space and ongoing initiatives. This was followed by Flipkart’s keynote talk and a wonderful, insightful panel discussion. Nikhita’s keynote, “The Cloud Native So Far”, is a must-watch, where she talked about CNCF’s journey until now.

After the keynote, I went to the speaker’s room, prepared briefly, and then proceeded to the community booth area to set up the Flatcar Container Linux booth. The booth received many visitors. Being alone there, I asked Anirudha Basak, a Flatcar contributor, to help for a while. People asked all sorts of questions, from Flatcar’s relevance in the CNCF space to how it works as a container host and how they could adapt Flatcar in their infrastructure.

Around 5 PM, I wrapped up the booth and went to my talk room to present “Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext”. The talk covered an introduction to systemd-sysext, Flatcar & Cluster API. It then discussed how the current setup using Image Builder poses many infrastructure challenges, and how we’ve been utilizing systemd to resolve these challenges and simplify using ClusterAPI with multiple providers. The post-talk conversation was engaging, as we discussed sysext, which was new to many attendees, leading to productive hallway track discussions.

Day 2 began with me back in the keynote hall. First up were Aparna & Sumedh talking about Shopify using GenAI + Kubernetes for their workloads, followed by Lachie sharing the Kubernetes story with the Mandala and Indian contributors as the focal point. As a photography enthusiast, I particularly enjoyed the talk, presented through Lachie’s own photographs.

Soon after, I proceeded to my tutorial room. Though I had planned to follow the Flatcar tutorial we have, the AV setup broke down after the introductory start, and the session turned into a Q&A. It was difficult to regain momentum. The middle section was filled mostly with questions, many about Flatcar’s security perspective and its integration. After the tutorial wrapped up, lunch time was mostly taken up by hallway track discussions with tutorial attendees. We had the afternoon slot on the second day for the Flatcar booth, though attendance decreased as people began leaving for the conference’s end. The range of interactions remained similar, with some attendees from talks and workshops visiting the booth for longer discussions. I managed to squeeze in some time to visit the Microsoft booth at the end of the conference.

Overall, I had an excellent experience, and kudos to the organizers for putting on a splendid show.

Takeaways

Being at a booth representing Flatcar for the first time was a unique experience, with a mix of people - some hearing about Flatcar for the first time and confusing it with container images, requiring explanation, and others familiar with container hosts & Flatcar bringing their own use cases. Questions ranged from update stability to implementing custom modifications required by internal policies, SLSA, and more. While I’ve managed booths before, this was notably different. Better preparation regarding booth displays, goodies, and Flatcar resources would have been helpful.

The talk went well, but presenting a tutorial was a different experience. I had expected hands-on participation, having recently conducted a successful similar session at rootconf. However, since most KubeCon attendees didn’t bring computers, I plan to modify my approach for future KubeCon tutorials.

At the booth, I also received questions about WASM + Flatcar, as Flatcar was categorized under WASM in the display.


Credit for the photos goes to CNCF (as posted in the KubeCon + CloudNativeCon India 2024 Flickr album) and to @vipulgupta.travel

February 05, 2025 12:00 AM

Pixelfed on Docker

I have been running a Pixelfed instance for some time now at https://pixel.kushaldas.photography/kushal. This post contains quick setup instructions using docker/containers for the same.

screenshot of the site

Copy over .env.docker file

We will need the .env.docker file and modify it as required, especially the following; you will have to fill in the values for each one of them.

APP_NAME=
APP_DOMAIN=
OPEN_REGISTRATION="false"   # because personal site
ENFORCE_EMAIL_VERIFICATION="false" # because personal site
DB_PASSWORD=

# Extra values to db itself
MYSQL_DATABASE=
MYSQL_PASSWORD=
MYSQL_USER=

CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"
QUEUE_DRIVER="redis"
SESSION_DRIVER="redis"

REDIS_HOST="redis"

ACTIVITY_PUB="true"

LOG_CHANNEL="stderr"

The actual docker compose file:

---

services:
  app:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    depends_on:
      - db
      - redis
    # The port statement makes Pixelfed run on Port 8080, no SSL.
    # For a real instance you need a frontend proxy instead!
    ports:
      - "8080:80"

  worker:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    entrypoint: /worker-entrypoint.sh
    depends_on:
      - db
      - redis
      - app
    healthcheck:
      test: php artisan horizon:status | grep running
      interval: 60s
      timeout: 5s
      retries: 1

  db:
    image: mariadb:11.2
    restart: unless-stopped
    env_file:
      - ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGE_ME
    volumes:
      - "/data/db-data:/var/lib/mysql"

  redis:
    image: zknt/redis
    restart: unless-stopped
    volumes:
      - "redis-data:/data"

volumes:
  redis-data:
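With the .env file in place, bringing the whole stack up is the usual compose workflow:

# start Pixelfed, the worker, MariaDB and Redis in the background
docker compose up -d

# follow the application logs while testing the instance
docker compose logs -f app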

I am using nginx as the reverse proxy. The only thing to remember there is to pass .well-known/acme-challenge to the correct directory for letsencrypt; the rest should point to the container.
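A rough sketch of what that nginx configuration can look like (host name, certificate paths and the challenge webroot are placeholders, not my actual config):

server {
    listen 80;
    server_name pixel.example.org;

    # letsencrypt HTTP-01 challenges are answered from a local webroot
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # everything else is redirected to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    server_name pixel.example.org;

    ssl_certificate     /etc/letsencrypt/live/pixel.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pixel.example.org/privkey.pem;

    # the rest points to the Pixelfed container published on port 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}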

January 31, 2025 05:44 AM

Dealing with egl_bad_alloc error for webkit

I was trying out some Toga examples, and for the webview I kept getting the following error and a blank screen.

Could not create EGL surfaceless context: EGL_BAD_ALLOC.

After many hours of searching, I reduced the reproducer to a simple piece of Python GTK code.

import gi

# pin the GTK and WebKit2 versions before importing from gi.repository
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')

from gi.repository import Gtk, WebKit2

# a plain top-level window; closing it quits the main loop
window = Gtk.Window()
window.set_default_size(800, 600)
window.connect("destroy", Gtk.main_quit)

# a WebKit webview inside a scrolled window is enough to trigger the error
scrolled_window = Gtk.ScrolledWindow()
webview = WebKit2.WebView()
webview.load_uri("https://getfedora.org")
scrolled_window.add(webview)

window.add(scrolled_window)
window.show_all()
Gtk.main()

Finally I asked for help in the #fedora IRC channel; within seconds Khaytsus gave me the fix:

WEBKIT_DISABLE_COMPOSITING_MODE=1 python g.py

working webview

January 18, 2025 07:43 AM

About

I’m Nabarun Pal, also known as palnabarun or theonlynabarun, a distributed systems engineer and open source contributor with a passion for building resilient infrastructure and fostering collaborative communities. Currently, I work on Kubernetes and cloud-native technologies, contributing to the ecosystem that powers modern distributed applications.

When I’m not deep in code or community discussions, you can find me planning my next adventure, brewing different coffee concoctions, tweaking my homelab setup, or exploring new mechanical keyboards. I believe in the power of open source to democratize technology and create opportunities for everyone to contribute and learn.

A detailed view of my speaking engagements is on the /speaking page.

January 06, 2025 12:00 AM

Keynote at PyLadiesCon!

Since the very inception of my journey in Python and PyLadies, I have always thought of having a PyLadies Conference, a celebration of PyLadies. There were conversations here and there, but nothing was fruitful then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how come I didn’t?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?

On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice. I saw an email from PyLadiesCon, titled "Invitation to be a Keynote Speaker at PyLadiesCon". The panic call went to Kushal Das: "Check if there is any attack on the Python server? I got a spammy email about PyLadiesCon and the address is correct." "No, nothing," replied Kushal after checking. Wait, then... WHAT??? PyLadiesCon wants me to give the keynote. THE KEYNOTE at PyLadiesCon.

Thank you Audrey for conceptualizing and creating PyLadies, our home.

keynote_pyladiescon.png

And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon on how PyLadies gave me purpose. See you all there.

Dreams do come true.

by Anwesha Das at November 29, 2024 05:35 PM

Looking back to Euro Python 2024

Over the years, whenever I am low, I always go back to the 2014 Euro Python talk "Farewell and Welcome Home: Python in Two Genders" by Naomi. It has become the first step of my coping mechanism and the door to my safe house. Though 2024 marked my first Euro Python in person, I have long had a connection with and respect for the conference. A conference that believes community matters, that human values and feelings matter, and that is not afraid to walk the talk. And the conference stood up to my expectations in every bit.

euro_python_3.jpeg

My Talk: Intellectual Property Law 101

I gave my talk on Intellectual Property Law on the first day. After a long time, I was giving a talk on a legal topic. This talk was dedicated to developers, so I concentrated only on those issues which concern developers, and tried to stitch the related topics of Patents, Trademarks, and Copyright together into a smooth flow, so that it becomes easier for developers to understand and remember for practical future use. I was concerned whether I would be able to connect with people. Later, people came to me with several related questions, starting from:

  • Why should I be concerned about patents?

  • Which license would fit my project?

  • Should I be scared about any Trademarks granted to other organizations under some other jurisdiction?

So on and so forth. Though I could not finish the whole talk due to time constraints, I am happy with the overall response.

Panel: Open Source Sustainability

On Day 1 of the main conference, we had the panel on Open Source Sustainability. This topic lies at the core of the open-source ecosystem: the sustainability of projects and communities for future stability. The panel had Deb Nicholson, Armin Ronacher, Çağıl Uluşahin Sönmez, Samuel Colvin, and me, with Artur Czepiel as the moderator. I was happy to represent my community’s side. It was a good discussion, and hopefully we could give answers to some questions of the community in general.

Birds of Feather session: Open Source Release Management

This Birds of a Feather (BoF) session was intended to deal with the release management of various Open Source projects, irrespective of their size. The discussion included all kinds of projects, from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with several hundred contributors.

  • What methods do we follow regarding versioning, release cadence, and the process?

  • Do most of us follow manual processes or depend on automated ones?

  • What works and what does not, and how can we improve our lives?

  • What are the significant points that make the difference?

We discussed and covered the following topics: different aspects of release management of Open Source projects, security, automation, CI usage, and documentation. We followed the Chatham House Rule during the discussion to provide space for open, frank, and collaborative conversation.

PyLadies Lunch

And then comes my favorite part of the conference: the PyLadies Lunch. It was my seventh PyLadies lunch, and I was moderating it for the fifth time. But this time, my wonderful friends Laís and Çağıl were by my side, holding me up when I failed. I love every time I am at a PyLadies lunch. This is where I get my strength, energy, and love.

Workshop

I attended two workshops organized by Anezka Muller, Mia Bajić and all the amazing PyLadies organizers:

  • A self-defense workshop, where the moderators helped us navigate challenging situations we face in life, safeguard ourselves from them, and overcome them.

  • The I AM Remarkable workshop, where we learned to tell people about our successes.

Representing Ansible Community

I always take the chance to meet Ansible community members face-to-face. Euro Python gave me another opportunity to do that. I learned about different user stories that we do not get to hear from our work corners, and about unique problems and their solutions in Ansible.
Fun fact: Maarten gave me a review after learning that I am Anwesha from the Ansible project. He said, "Can you Ansible people slow down in releasing new versions of Ansible? Every time we get used to one, we have a new version."

euro_python_1.jpeg

Acknowledging mental health issues

The proudest moment for me personally was when I acknowledged my mental health issues, and later when people came to me saying how they related to me and how they felt empowered when I mentioned this.

euro_python_2.jpeg

PyLadies network at Red Hat

A network of PyLadies within Red Hat has been my dream since I joined Red Hat. Karolina also agreed when I shared this with her at last year’s DevConf. And finally, we initiated it on day 2 of the conference. We are so excited for the future to come.

Meeting friends

Conferences mean friends. It was so great to meet so many friends after such a long time: Tylor, Nicholas, Naomi, Honza, Carol, Mike, Artur, Nikita, Valerio, and many new ones: Jannis, Joana, Christian, Martina, Tereza, Maria, Alyona, Mia, Naa, Bojan and Jodie. A special note of love to Jodie: you held my hand and took me out of the dark.

euro_python_4.jpeg

The best is saved for last. Euro Python 2024 made three of my dreams come true.

  • Gender Neutral Washrooms

  • Sanitary products in restrooms (I remember carrying sanitary napkins in my backpack at PyCon India and telling girls that if they needed one, it was available at the PyLadies booth).

  • The neurodiversity bag (which saved me at the conference; thank you, Karolina, for this)

euro_python_0.jpeg

I cannot wait for the next Euro Python; see you all at Euro Python 2025.

PS: Thanks to Lias, I will always have a small piece of Euro Python 2024 with me. I know I am loved and cared for.

by Anwesha Das at July 17, 2024 11:42 AM

Euro Python 2024

It is July, and it is time for Euro Python, and 2024 is my first Euro Python. Some busy days are on the way. Like every other conference, I have my diary, and the conference days are full of various activities.

euro_travel_0.jpeg

Day 0 of the main conference

After a long time, I will give a legal talk. We are going to dig into some basics of Intellectual Property. What is it? Why do we need it? What are the different kinds of intellectual property? It is a legal talk designed for developers, so anyone and everyone from the community, even without previous legal knowledge, can understand the content and use it to understand their fundamental rights and duties as developers. The talk, Intellectual Property 101, is scheduled at 11:35 hrs.

Day 1 of the main conference

Day 1 is PyLadies Day, a day dedicated to PyLadies. We have crafted the day with several different kinds of events. The day opens with a self-defense workshop at 10:30 hrs. PyLadies, throughout the world, aims to provide and foster a safe space for women and friends in the Python community. This workshop is an extension of that goal. We will learn how to deal with challenging, inappropriate behavior in the community, at work, or in any social space. We will have a trained psychologist as a session guide to help us. This workshop is as important today as it was yesterday and may be in the future (at least until the enforcement of the CoC is clear). I am so looking forward to the workshop. Thank you, Mia, Lias and all the PyLadies for organizing this and giving shape to my long-cherished dream.

Then we have my favorite part of the conference, PyLadies Lunch. I crafted the afternoon with a little introduction session, shout-out session, food, fun, laughter, and friends.

After the PyLadies Lunch, I have my only non-PyLadies session, which is a panel discussion on Open Source Sustainability. We will discuss the different aspects of sustainability in the open source space and community.

Again, it is PyLadies’ time. Here, we have two sessions.

[IAmRemarkable](https://ep2024.europython.eu/pyladies-events#iamremarkable), to help empower you by celebrating your achievements and to fight your impostor syndrome. The workshop will help you celebrate your accomplishments and improve your self-promotion skills.

The second session is a 1:1 mentoring event, Meet & Greet with PyLadies. Here, willing PyLadies will be able to mentor and be mentored. They can be coached on different subjects, starting with programming, learning, things related to jobs and/or careers, etc.

Birds of feather session on Release Management of Open Source projects

It is an open discussion on the release management of the Open Source ecosystem. The discussion includes everything from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with a contributor base of several hundred. What are the different methods we follow regarding versioning, release cadence, and the process itself? Do most of us follow manual processes or depend on automated ones? What works and what does not, and how can we improve our lives? What are the significant points that make the difference? We will discuss and cover the following topics: release management of open source projects, security, automation, CI usage, and documentation. In the discussion, I will share my release automation journey with Ansible. We will follow the Chatham House Rule during the discussion to provide space for open, frank, and collaborative conversation.

So, here comes the days of code, collaboration, and community. See you all there.

PS: I miss my little Py-Lady volunteering at the booth.

by Anwesha Das at July 08, 2024 09:56 AM

A Tragic Collision: Lessons from the Pune Porsche Accident

I’m writing a blog post after a very long time, as I kept procrastinating, but today I decided to write about something important, and yes, it is a hot topic in the country right now. In Pune, a 17-year-old boy was driving a Porsche while under the influence of alcohol. As I read in the news, he was speeding, and while speeding, his car hit a two-wheeler, resulting in the death of two young people who were techies.
June 03, 2024 11:39 AM

Making my first OnionShare release

One of the biggest bottlenecks in maintaining the OnionShare desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that a release works on the different desktop operating systems. To know more about the pain that goes into making an OnionShare release, read the blogs[1][2][3] that Micah Lee wrote on this topic.

However, another big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases, and even though the other maintainers are aware of the process, we have never actually made a release ourselves. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.

PS: Since Micah has written pretty detailed blogs with code snippets, I am not going to include many code snippets (unless I made significant changes), so as not to lengthen this already long post further. I am going to keep this blog more like a narrative of my experience.

Getting the hardwares ready

Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine to do the OnionShare release work, especially the signing part of things. Micah has already automated a lot of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon version of OnionShare manually and then merge it with the Intel version to create a universal2 app bundle.

Also, in general, it's good practice to keep and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for this. It would help me build the Apple Silicon version as well as sign the packages for the other operating systems.

Also, I received the HARICA signing keys from Glenn Sorrentino, which are needed for signing the Windows releases.

Fixing the bugs, merging the PRs

After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making 2.6.1. We fixed, reviewed and merged most of them. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.

Translations

Localization is an important part of OnionShare since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, emmapeel2, who always helps us with Weblate wizardry, made certain changes in the setup which I also wanted to include in this release.

After creating the release PR, I also needed to check which languages are more than 90% translated, make a push to hopefully get some more languages past that threshold, and finally make the OnionShare release with only the languages that cross it.

Making the Release PR

And, then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.

Updating dependencies in Snapcraft

Updating the poetry dependencies went pretty smoothly.

There was nothing much to update in the pluggable transport scripts as well.

But then I started updating and packaging for Snapcraft and Flatpak. Updating tor versions to the latest went pretty smoothly. In snapcraft, the python dependencies needed to be compared manually with the pyproject.toml. I definitely feel like we should automate this process in future, but for now, it wasn't too bad.

But trying to build the snap with snapcraft locally just was not working for me on my system. I kept getting lxd errors that I was not fully sure what to do about. I decided to move ahead with the flatpak packaging and wait to discuss the snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.

Updating dependencies in Flatpak

Even though I had read about the hardship that Micah went through with updating the pluggable transports and python dependencies in the flatpak packaging, I didn't learn my lesson. I decided to give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other kept failing.

Then, I moved on to updating the python dependencies for flatpak. The generator code that Micah wrote for the desktop dependencies worked perfectly, but the CLI part gave me pain. The format in which the dependencies were being generated and the existing format did not match. And I didn't want to be too brave and change the format, since flatpak isn't my area of expertise. But Python kind of is. So I decided to check whether I could update the flatpak-poetry-generator.py files to make them work. And I managed to fix that!

That helped me update the dependencies in flatpak.

MacOS and Windows Signing fun!

Creating Apple Silicon app bundle

As mentioned before, we still need to create an Apple Silicon app bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, I needed to install the poetry dependencies, the tor dependencies and the pluggable transport dependencies.

And I hit an issue again: our get-tor.py script was not working.

The script failed to verify the Tor Browser version that we were downloading. This has happened before, and I suspected that the Tor PGP key had expired. I tried verifying manually, and it seemed like that was the case: the subkey used for signing had expired. So I downloaded the new Tor Browser Developers signing key, created a PR, and it seemed like I could download tor now.
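Refreshing the key is essentially fetching the current Tor Browser Developers signing key again and re-verifying the download, roughly like this (the archive name is illustrative):

# fetch the current Tor Browser Developers signing key via WKD
gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org

# then check the detached signature of the downloaded archive again
gpg --verify tor-browser.tar.gz.asc tor-browser.tar.gz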

Once that was done, I just needed to run:

/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build

And amazingly, it built successfully on the very first try! That was easy! Now I just needed to merge the Intel app bundle and the Silicon app bundle, and everything should work (spoiler alert: it doesn't!).

Once the app bundle was created, it was time to sign and notarize it. However, the process was a little difficult for me to do since Micah had previously used an individual account. So I passed the universal2 bundle on to him and moved on to the signing work on Windows.

Signing the Windows package

I had to boot into my Windows 11 VM to finish the signing and make the Windows release. Since this was the first time I was doing the release, I had to first get my VM ready by installing all the dependencies needed for signing and packaging. I am not super familiar with the Windows development environment, so I had to figure out adding things to PATH and other such tweaks to make all the dependencies work. The next thing to do was setting up the HARICA smart card.

Setting up the HARICA smart card

Thankfully, Micah had already done this before, so he was able to help me out a bit. I had to log into the control panel, download and import certificates to my smart card, and change the token password and administrator password for the smart card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.

Since Micah had already made some changes to fix the code signing and packaging stuff, it went pretty smoothly for me and I didn't face many obstructions. Science & Design, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name of Science and Design Inc.

Meanwhile, Micah had gotten back to me saying that the universal2 bundle didn't work.

So, the Apple Silicon bundle didn't work

One of the mistakes I made was that I didn't test my Apple Silicon build. I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working. It kept giving a segmentation fault. Time to get back to debugging.

Downgrading cx-freeze to 6.15.9

The first thought that came to my mind was: Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try a build (instead of bdist_mac) with the cx-freeze version from that time (6.15.9) and check if the resulting binary works. And thankfully, it did. I tried with 6.15.10 and it didn't. So I decided to stick to 6.15.9.
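Pinning it is a one-liner with poetry (assuming cx_Freeze sits in the regular dependency group of the project):

# pin cx_Freeze to the last version known to produce a working binary
poetry add "cx_Freeze==6.15.9"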

So let's now try running bdist_mac, create a .app bundle, and hopefully everything will work perfectly! But nope! The command failed with:

OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory

So now I had a decision to make: should I try to monkey-patch this and just figure out how to fix it, or try to make the latest cx-freeze work? I decided to give the latest cx-freeze (version 6.15.15) another try.

Trying zip_include_packages

So, one thing I noticed we were doing differently from the cx-freeze documentation and the examples for PySide6 was that we put our dependencies in packages, instead of zip_include_packages, in the setup options.

    "build_exe": {
        "packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I thought, let's try moving all of the dependencies from packages into zip_include_packages. Basically, zip_include_packages includes the dependencies in the zip file, whereas packages places them in the file system and not in the zip file. My guess was that the Apple Silicon expectations of how a .app bundle should be structured had changed. So the new options looked something like this:

    "build_exe": {
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I created a build using that, ran the binary, and it gave an error. But I was happy, because it wasn't a segmentation fault. The error was mainly because it was not able to import some functions from onionshare_cli. So as a next step, I decided to move everything apart from onionshare and onionshare_cli to zip_include_packages. It looked something like this:

    "build_exe": {
        "packages": [
            "onionshare",
            "onionshare_cli",
        ],
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

This almost worked. The problem was that PySide 6.4 had changed how enums are handled, and we were still using deprecated code. Now, fixing the deprecations would take a lot of time, so I decided to create an issue for it and deal with it after the release.

At this point, I was pretty frustrated, so I decided to do what I didn't want to do: just have both packages and zip_include_packages. So I did that, built the binary, and it worked. I decided to make the .app bundle. It worked perfectly as well! Great!

I was a little worried that adding the dependencies in both packages and zip_include_packages might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I don't need to replace the lib directory inside the .app bundle anymore. I ran the cleanup code, hit some FileNotFoundError exceptions, tried to find whether the files were now in a different location, couldn't find them, and decided to wrap those steps in a try-except block.

After that, I merged the Silicon bundle with the Intel bundle to create the universal2 bundle again and sent it to Micah for signing, and it seems like everything worked!

Creating PGP signature for all the builds

Now that we had all the build files ready, I tried installing and running them all, and it seems like everything is working fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created the signatures. So the options for us now were:

  • create an OnionShare GPG key that everyone uses
  • sign with my GPG and update the documentation to reflect the same

The issue with creating a new OnionShare GPG key was distribution. The maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.
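The signatures themselves are ordinary detached, ASCII-armored GPG signatures, one per artifact (the file name below is just an example):

# create a detached ASCII-armored signature next to the artifact
gpg --armor --detach-sign OnionShare-2.6.1.dmg

# users verify the download against the published .asc file
gpg --verify OnionShare-2.6.1.dmg.asc OnionShare-2.6.1.dmg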

Concluding the release

Once the signatures were done, the next steps were mostly straightforward:

  • Create a GitHub release
  • Publish onionshare-cli on PyPi
  • Push the build and signatures to the onionshare.org servers and update the website and docs
  • Create PRs in Flathub and Homebrew cask
  • Move the snapcraft edge channel to stable

The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.

February 29, 2024 12:41 PM
