Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

2025

What do all the stars and daggers after the book titles mean?


Note to self, for this year: Read less, write more notes. Abandon more books.

January

  1. Murder at the Vicarage, Agatha Christie*
  2. The Body in the Library, Agatha Christie*
  3. The Moving Finger, Agatha Christie*
  4. Sleeping Murder, Agatha Christie*
  5. A Murder Is Announced, Agatha Christie*
  6. They Do It with Mirrors, Agatha Christie*
  7. My Horrible Career, John Arundel*
  8. The Veiled Lodger, Sherlock & Co. Podcast*
  9. Hardcore History, Mania for Subjugation II, Episode 72*
  10. A Pocket Full of Rye, Agatha Christie*
  11. 4.50 from Paddington, Agatha Christie*
  12. The Mirror Crack’d From Side to Side, Agatha Christie*
  13. As You Wish: Inconceivable Tales from the Making of The Princess Bride, Cary Elwes & Joe Layden*
  14. A Caribbean Mystery, Agatha Christie*
  15. At Bertram’s Hotel, Agatha Christie*
  16. Nemesis, Agatha Christie*
  17. Miss Marple’s Final Cases, Agatha Christie*

February

  1. A Shadow in Summer, Daniel Abraham*
  2. Black Peter, Sherlock & Co. Podcast, Season 25*
  3. On Writing with Brandon Sanderson, Episodes 1-4, Brandon Sanderson*
  4. A Betrayal in Winter, Daniel Abraham*
  5. I Will Judge You by Your Bookshelf, Grant Snider*
  6. The Art of Living, Grant Snider*
  7. The Shape of Ideas, Grant Snider*
  8. For the Love of Go, John Arundel*
  9. Powerful Command-Line Applications in Go, Ricardo Gerardi*
  10. Learning Go, Jon Bodner*
  11. An Autumn War, Daniel Abraham*
  12. The Price of Spring, Daniel Abraham*
  13. Math for English Majors, Ben Orlin (Notes)*
  14. Empire Podcast, The Three Kings, Episodes 212–214*#

March

  1. Companion to the Count, Melissa Kendall*
  2. Wisteria Lodge, Sherlock & Co. Podcast, Season 26*
  3. A Story of Love, Minerva Spencer*
  4. The Etiquette of Love, Minerva Spencer*
  5. A Very Bellamy Christmas, Minerva Spencer*
  6. Empire Podcast, The Rise and Fall of the Mughal Empire, Episodes 205–211, 215-222*#
  7. Empire Podcast, Britain’s Last Colony, Episodes 229-230*#
  8. Head First Java (3rd edition), Kathy Sierra, Bert Bates & Trisha Gee*
  9. Head First Go, Jay McGavren*
  10. The Rest is History, The French Revolution (Part II), Episodes 503–507*#
  11. The Rest is History, The French Revolution (Part III), Episodes 544-547*#
  12. Morris Chang & TSMC, Spring 2025, Episode 1, Acquired Podcast*#
  13. Rolex, Spring 2025, Episode 2, Acquired Podcast*#
  14. Head First C, Dawn Griffiths & David Griffiths*

April

  1. Head First Learn to Code, Eric Freeman*
  2. On Writing with Brandon Sanderson, Episodes 4.5-8, Brandon Sanderson*
  3. Spellfire Thief, Sarah Hawke*
  4. Thinking About Thinking, Grant Snider*
  5. The Disappearance of Lady Frances Carfax, Sherlock & Co. Podcast, Season 28*
  6. Deep Questions, Cal Newport, Episodes 01-10*#
  7. Deep Questions, Cal Newport, Episodes 11-20*#
  8. Unlovable, Darren Hayes*

May

  1. Deep Questions, Cal Newport, Episodes 21-30*#
  2. Dick Barton and the Secret Weapon, Edward J Mason*#
  3. Dick Barton and the Paris Adventure, Edward J Mason*#
  4. Dick Barton and the Cabatolin Diamonds, Edward J Mason*#
  5. Kill the Pharaoh, Victor Pemberton*#
  6. On Writing with Brandon Sanderson, Episodes 8-12, Brandon Sanderson*
  7. Deep Questions, Cal Newport, Episodes 31-40*#
  8. Trial & Error (The Hardy Boys), Franklin W. Dixon
  9. Understanding APIs and RESTful APIs Crash Course, Kalob Taulien (Udemy)*#
  10. System Collapse, Martha Wells*
  11. Deep Questions, Cal Newport, Episodes 31-40*#
  12. A Sham Engagement, Fil Reid
  13. A Hint of Scandal, Fil Reid
  14. Gideon the Ninth, Tamsyn Muir*#
  15. Deep Questions, Cal Newport, Episodes 41-50*#
  16. Deep Questions, Cal Newport, Episodes 51-60*#
  17. Harrow the Ninth, Tamsyn Muir*#

June

  1. Deep Questions, Cal Newport, Episodes 61-70*#
  2. Deep Questions, Cal Newport, Episodes 71-80*#
  3. Apple in China, Patrick McGee*
  4. Dreaming of Elisabeth, Camilla Lackberg*
  5. An Elegant Death, Camilla Lackberg*
  6. Steve Ballmer, Summer 2025, Episode 1, Acquired Podcast#
  7. Deep Questions, Cal Newport, Episodes 81-90*#
  8. The Rest is History, Warlords of the West, The Rise and Fall of the Franks, Episodes 520-525*#
  9. The Rest is History, Heart of Darkness, Horror in the Congo, Episodes 538-541*#
  10. Antifragile, Nassim Nicholas Taleb*
  11. A Man and a Woman, Robin Schone*
  12. Deep Questions, Cal Newport, Episodes 91-100*#
  13. A Scandal in Bohemia, Sherlock & Co. Podcast, Season 30*
  14. How to Read a Book, Mortimer J. Adler*
  15. The Secret Rules of the Terminal, Julia Evans*
  16. The Lover, Robin Schone
  17. Slide:ology, Nancy Duarte*

July

  1. Deep Questions, Cal Newport, Episodes 101-110*#
  2. Deep Questions, Cal Newport, Episodes 111-120*#
  3. The Rest is History, 1066: The Norman Conquest of England, Episodes 548-557*#
  4. Emacs Writing Studio, Peter Prevos*
  5. Deep Questions, Cal Newport, Episodes 121-130*#
  6. Empire Podcast, The History of Ireland, Episodes 231-246*#
  7. The Priory School, Sherlock & Co. Podcast, Season 32*
  8. The Adventures of Johnny Bunko: The Last Career Guide You’ll Ever Need, Daniel H. Pink*
  9. The Sketchnote Handbook, Mike Rohde*
  10. Business Etiquette, Ann Marie Sabath*
  11. Dare to Tempt an Earl This Spring, Sara Adrien & Tanya Wilde*
  12. How to Lose a Prince This Summer, Sara Adrien & Tanya Wilde*
  13. Empire Podcast, Victorian Narcos (The Opium Wars), Episodes 248-255*#
  14. Deep Questions, Cal Newport, Episodes 131-140*#
  15. Lost Islamic History, Firas Alkhateeb*
  16. Empire Podcast, Canada, Episodes 267-272*#
  17. Deep Questions, Cal Newport, Episodes 141-150*#
  18. 100 Tricks to Appear Smart in Meetings, Sarah Cooper*

August

  1. Empire Podcast, The Panama Canal, Episodes 273-277*#
  2. Deep Questions, Cal Newport, Episodes 151-160*#
  3. Dick Barton and the Smash and Grab Raiders, Edward J Mason*#

July 31, 2025 06:30 PM

Joy of automation

After 145+ commits spread over multiple PRs, 450+ review conversations and rounds of feedback, and accountable communication across several different mediums spanning over 2 years, the Ansible release management is finally completely automated, using GitHub Actions. When I joined Red Hat in November 2022, I was tasked with releasing the Ansible Community Package.

The first hurdle I faced was that there was no documented release process. What we had were release managers' private notes, scattered across personal repositories, internal Red Hat Google Docs, and personal code. Since all the past release managers (apart from one) had left the organization, it was very difficult to gather and figure out what the release process was, and why and how it worked. I had one supporter: my trainer, the then-release manager, Christian. He shared with me his notes and the steps he followed, and guided me through how he did the release.

Now we have a community release managers working group, where contributors from the community also take part and release Ansible. And we have two GitHub Actions that do the work:

  • The first action builds the package, opens a PR to the repository, and then waits for human input.
  • Meanwhile, the release manager can use the second action to open another PR against the Ansible documentation repository, carrying the updated porting guide from the first PR.
  • After the PRs are approved, the release manager can continue with the first action and release the Ansible wheel package and the source tarball to PyPI in a fully automated way using trusted publishing.
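
For readers unfamiliar with trusted publishing, here is a minimal sketch of what such a publishing job can look like (an illustration, not the actual Ansible release workflow; the job, environment, and artifact names are made up):

jobs:
  pypi-publish:
    runs-on: ubuntu-latest
    environment: release
    permissions:
      id-token: write  # OIDC token; this is what makes trusted publishing work, no API token needed
    steps:
      # Assumption: an earlier job uploaded the built wheel/sdist as an artifact named "dist"
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      # The official PyPA action picks up trusted publishing automatically
      # when no username/password is configured
      - uses: pypa/gh-action-pypi-publish@release/v1

The PyPI project has to be configured (under its publishing settings) to trust that specific repository, workflow, and environment.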

I would like to thank Felix, Gotmax, and Sviatoslav for their feedback throughout this journey.

Many say automation is bad. In many companies, management gets the wrong idea that once good automation is in place, they can fire senior engineers and have interns or inexperienced people get the job done. That works until something breaks down. The value of experience shows when the automation itself has to be fixed. Automation lets new folks get introduced to things, and frees experienced folks to work on other things.

by Anwesha Das at July 27, 2025 10:42 PM

Arrow Function vs Regular Function in JavaScript


Yeah, everyone already knows the syntax is different. No need to waste time on that.

Let’s look at what actually matters — how they behave differently.


1. arguments

Regular functions come with a built-in arguments object. Even if you don’t define any parameters, you can still access whatever got passed when the function was called.

Arrow functions? Nope. No arguments object. Try using it, and it’ll just throw an error.

Regular function:

function test() {
  console.log(arguments);
}

test(1, "hello world", true); 
// o/p
// { '0': 1, '1': 'hello world', '2': true }

Arrow function:

const test = () => {
  console.log(arguments); 
};

test(1, "hello world", true); // Throws ReferenceError

2. return

Arrow functions have an implicit return, but regular functions don't. That is, an arrow function returns the result automatically when its body is a single expression (optionally wrapped in parentheses). Regular functions always require the return keyword.

Regular function:

function add(a, b) {
 const c = a + b;
}

console.log(add(5, 10)); // o/p : undefined 

Arrow function:

const add = (a, b) => (a + b);

console.log(add(5, 10)); // o/p : 15

3. this

Arrow functions do not have their own this binding. Instead, they lexically inherit this from the surrounding (parent) scope at the time of definition. This means the value of this inside an arrow function is fixed and cannot be changed using .call(), .apply(), or .bind().

Regular functions, on the other hand, have dynamic this binding — it depends on how the function is invoked. When called as a method, this refers to the object; when called standalone, this can be undefined (in strict mode) or refer to the global object (in non-strict mode).

Because of this behavior, arrow functions are commonly used in cases where you want to preserve the outer this context, such as in callbacks or within class methods that rely on this from the class instance.

Regular function :

const obj = {
  name: "Titas",
  sayHi: function () {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p : Titas

Arrow function :

const obj = {
  name: "Titas",
  sayHi: () => {
    console.log(this.name);
  }
};

obj.sayHi(); // o/p :  undefined
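
Here is a small sketch of the .call()/.bind() point from above (the names are illustrative):

const person = { name: "Titas" };

function regular() {
  console.log(this.name);
}

const arrow = () => {
  console.log(this?.name);
};

regular.call(person); // o/p : Titas (call() rebinds a regular function's this)
arrow.call(person);   // o/p : undefined (the arrow keeps its lexical this)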

print("Titas signing out !")
July 23, 2025 07:20 PM

Debugging max_locks_per_transaction: A Journey into Pytest Parallelism

So I was fixing some slow tests, and whenever I ran them through the pytest command, I was greeted with the dreaded max_locks_per_transaction error.

My first instinct? Just crank up the max_locks_per_transaction from 64 to 1024.

But... that didn’t feel right. I recreate my DB frequently, which means I’d have to set that value again and again. It felt like a hacky workaround rather than a proper solution.

Then, like any developer, I started digging around — first checking the Confluence page for dev docs to see if anyone else had faced this issue. No luck. Then I moved to Slack, and that’s where I found this command someone had shared:

pytest -n=0

This was new to me. So, like any sane dev in 2025, I asked ChatGPT what this was about. That’s how I came across pytest-xdist.

What is pytest-xdist?

The pytest-xdist plugin extends pytest with new test execution modes — the most common one is distributing tests across multiple CPUs to speed up test execution.

What does pytest-xdist do?

Runs tests in parallel using <numprocesses> workers (Python processes), which is a game changer when:

  • You have a large test suite
  • Each test takes a significant amount of time
  • Your tests are independent (i.e., no shared global state)
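
A few common invocations, per the pytest-xdist docs:

pytest -n auto    # one worker per available CPU
pytest -n 4       # exactly 4 worker processes
pytest -n 0       # no workers: run the whole suite in a single process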


That’s pretty much it — I plugged in pytest -n=0, which disables xdist’s parallelism and runs everything in a single process, so the tests stopped piling up concurrent locks. Boom, no more max_locks_per_transaction errors.

Cheers!

References:

  • https://pytest-xdist.readthedocs.io/en/stable/
  • https://docs.pytest.org/en/stable/reference/reference.html

#pytest #Python #chatgpt #debugging

July 16, 2025 05:07 PM

Observations as I Learn to Read, 2

I finally got done transferring all my highlights, thoughts, and notes from Adler into a notes file on the computer. This was my first really large notes file, filled with hierarchies of headlines and tags, and Org Mode handled it like a champ!

It took me five times as long to transfer and organise the file as it took me to read the book. This was very painful!
I am writing this down, so that my future self remembers!
Very, very, painful!

So I need to be intentional about books that I need to analytically read and change my reading habits.

  1. Have a one-to-one ratio for notes vs reading.
    This means for every ten minutes I want to read, I need to allocate at least the same amount of time for bringing my notes in.
  2. All of the above has to be a single session.
    So reading, followed by a bit of reflection, followed by transferring my notes and annotations out to Org Roam. So my old hour of reading is now 30m of reading, followed by 30m of thinking and jotting my thoughts down.
  3. Cross my fingers, and hope that it’ll all work out. Hope that something sane, something intelligible emerges in the end.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


July 08, 2025 12:25 PM

Creating Pull request with GitHub Action

---
name: Testing Gha
on:
  workflow_dispatch:
    inputs:
      GIT_BRANCH:
        description: The git branch to be worked on
        required: true

jobs:
  test-pr-creation:
    name: Creates test PR
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
      contents: write
    env:
      GIT_BRANCH: ${{ inputs.GIT_BRANCH }}
    steps:
      - uses: actions/checkout@v4
      - name: Updates README
        run: date >> README.md

      - name: Set up git
        run: |
          git switch --create "${GIT_BRANCH}"
          ACTOR_NAME="$(curl -s https://api.github.com/users/"${GITHUB_ACTOR}" | jq --raw-output '.name // .login')"
          git config --global user.name "${ACTOR_NAME}"
          git config --global user.email "${GITHUB_ACTOR_ID}+${GITHUB_ACTOR}@users.noreply.github.com"

      - name: Add README
        run: git add README.md

      - name: Commit
        run: >-
          git diff-index --quiet HEAD ||
          git commit -m "test commit msg"
      - name: Push to the repo
        run: git push origin "${GIT_BRANCH}"

      - name: Create PR as draft
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
          gh pr create
          --draft
          --base main
          --head "${GIT_BRANCH}"
          --title "test commit msg"
          --body "pr body"

      - name: Retrieve the existing PR URL
        id: existing-pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url= >> "${GITHUB_OUTPUT}"

          gh pr view
          --json &aposurl&apos
          --jq &apos.url&apos
          --repo &apos${{ github.repository }}&apos
          &apos${{ env.GIT_BRANCH }}&apos
          >> "${GITHUB_OUTPUT}"
      - name: Select the actual PR URL
        id: pr
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          echo -n pull_request_url=
          >> "${GITHUB_OUTPUT}"

          echo &apos${{steps.existing-pr.outputs.pull_request_url}}&apos
          >> "${GITHUB_OUTPUT}"

      - name: Log the pull request details
        run: >-
           echo &aposPR URL: ${{ steps.pr.outputs.pull_request_url }}&apos | tee -a "${GITHUB_STEP_SUMMARY}"


      - name: Instruct the maintainers to trigger CI by undrafting the PR
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >-
            gh pr comment
            --body &aposPlease mark the PR as ready for review to trigger PR checks.&apos
            --repo &apos${{ github.repository }}&apos
            &apos${{ steps.pr.outputs.pull_request_url }}&apos

The above is an example of how to create a draft PR via GitHub Actions. We need to grant the workflow permission to create PRs in the repository (under the workflow permissions in the repository settings).

workflow_permissions.png

Hopefully, this blogpost will help my future self.

by Anwesha Das at July 06, 2025 06:22 PM

Emacs Package Updation Checklist


I’ve never updated my Emacs packages until recently, because Emacs is where all my writing happens, and so I’m justifiably paranoid.
But then some packages stopped working, due to various circumstances1, and an update solved it.

So I’ve decided to update my packages once a quarter, so that I don’t lose days yak shaving when something goes wrong and I handle breakage on my terms and not the machine’s.

As far as package management goes, I want to keep things simple.
In fact, I still haven’t graduated to use-package or straight.el because my package needs are few and conservative2. And so, while there are automatic update options out there, I’ll just stick to updating them manually, every quarter.

Ergo, this is the checklist I’ll use next time onwards …

  1. Stop emacs user service, systemctl --user stop emacs
  2. Backup emacs folder in ~/.config (see the sketch after this list).
  3. Start emacs manually (not the service).
  4. M-x package-refresh-contents
  5. M-x package-upgrade-all
  6. Problems? Quit emacs. Revert backup folder.
  7. In the end, start emacs user service, systemctl --user start emacs
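
For step 2, something like this is what I mean (assuming the packages live under ~/.config/emacs; adjust for your own setup):

systemctl --user stop emacs
cp -a ~/.config/emacs ~/.config/emacs.bak   # keep a pristine copy
# ... refresh and upgrade packages, test ...
# if things broke, restore the backup:
rm -rf ~/.config/emacs && mv ~/.config/emacs.bak ~/.config/emacs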

There’s an Org mode task, scheduled quarterly, so that I won’t forget.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. While I don’t want updated packages, I do want updated Emacs and that broke stuff 😂 ↩︎

  2. The biggest change I foresee is if JetBrains ever turn evil and I have to move off their editors and subsequently need to use Emacs as an IDE ↩︎

July 06, 2025 03:06 AM

How to Read a Book: 005, On Reading Speed


image courtesy, Simon & Schuster


This has never been a bugbear for me. A lifetime of reading has led me to read at a fairly fast clip. But Adler puts into specific words what it is that I actually, subconsciously do. This will help me advise my younger friends, where earlier I used to struggle and could offer nothing better than a “Just keep at it”.

Below followeth Adler’s advice …

  1. Great speed in reading is a dubious achievement; it is of value only if what you have to read is not really worth reading. A better formula is this:
    Every book should be read no more slowly than it deserves, and no more quickly than you can read it with satisfaction and comprehension. In any event, the speed at which they read, be it fast or slow, is but a fractional part of most people’s problem with reading.
  2. The ideal is not merely to be able to read faster, but to be able to read at different speeds — and to know when the different speeds are appropriate.

So if that is the ideal, how do we go about increasing our speed if we are slow or irregular? Adler has an observation and a suggestion that’ll take us most of the way there.

  1. The eyes of young or untrained readers “fixate” as many as five or six times in the course of each line that is read. (The eye is blind while it moves; it can only see when it stops.) Thus single words or at the most two-word or three-word phrases are being read at a time, in jumps across the line. Even worse than that, the eyes of incompetent readers regress as often as once every two or three lines—that is, they return to phrases or sentences previously read.
  2. Place your thumb and first two fingers together. Sweep this “pointer” across a line of type, a little faster than it is comfortable for your eyes to move. Force yourself to keep up with your hand. You will very soon be able to read the words as you follow your hand. Keep practicing this, and keep increasing the speed at which your hand moves, and before you know it you will have doubled or trebled your reading speed.

With a caveat however …

  • What exactly have you gained if you increase your reading speed significantly? It is true that you have saved time—but what about comprehension? Has that also increased, or has it suffered in the process?
    It is worth emphasizing, therefore, that it is precisely comprehension in reading that this book seeks to improve. You cannot comprehend a book without reading it analytically; analytical reading, as we have noted, is undertaken primarily for the sake of comprehension (or understanding).


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


July 04, 2025 11:30 PM

Joseph Conrad Foresees Aaron Swartz and the AI Bandits


via PRH NZ


Been listening to Joseph Conrad’s Heart of Darkness in the little cracks of time in the day1 and, as always, being led to the sad and inevitable conclusion that we always fail to learn from what came before.

It was as unreal as everything else—as the philanthropic pretence of the whole concern, as their talk, as their government, as their show of work. The only real feeling was a desire to get appointed to a trading-post where ivory was to be had, so that they could earn percentages. They intrigued and slandered and hated each other only on that account—but as to effectually lifting a little finger—oh, no.

By heavens! there is something after all in the world allowing one man to steal a horse while another must not look at a halter. Steal a horse straight out. Very well. He has done it. Perhaps he can ride. But there is a way of looking at a halter that would provoke the most charitable of saints into a kick.

Aaron Swartz had to give his life for his beliefs, yet when the robber barons thieve, everything is suddenly alright.
Greed, like love, never dies!


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. after hearing it heavily referred to in The Rest is History’s episodes on the rape, pillage and exploitation of the Congo (Episodes 538-541) ↩︎

July 04, 2025 02:47 AM

ChatGPT and Images

I’ve been working on a few side projects and using ChatGPT for ideation and brainstorming around ideas and features for the MVP. As part of this, I needed a logo for my app. Naturally, I turned to AI to help me generate one.

However, I noticed that when generating images, ChatGPT doesn’t always follow the guidelines perfectly. Each time I asked for a new version, it would create a completely different image, which made it difficult to iterate or make small tweaks.

But I found a better way.

Instead of generating a brand new image every time, I first explained my app idea and the name. ChatGPT generated an image I liked.

So I asked ChatGPT to generate the JSON for the image instead. I then manually tweaked the JSON file to adjust things exactly the way I wanted. When I asked ChatGPT to generate the image based on the updated JSON, it finally created the image as per my request — no random changes, just the specific adjustments I needed.

Exploration Phase

SplitX logo

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "black",
    "elements": [
      {
        "type": "text",
        "content": "SplitX",
        "font_style": "bold",
        "font_color": "white",
        "position": "center",
        "font_size": "large"
      },
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": ["#FF4E50", "#F9D423"]
          },
          {
            "section": "bottom-left",
            "gradient": ["#F9D423", "#FC913A"]
          },
          {
            "section": "top-right",
            "gradient": ["#24C6DC", "#514A9D"]
          },
          {
            "section": "bottom-right",
            "gradient": ["#514A9D", "#E55D87"]
          }
        ],
        "position": "center behind text",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

Final Design

SplitX logo

Updated JSON

{
  "image": {
    "file_name": "splitX_icon_with_text.png",
    "background_color": "transparent",
    "elements": [
      {
        "type": "shape",
        "shape_type": "X",
        "style": "geometric split",
        "colors": [
          {
            "section": "top-left",
            "gradient": [
              "#FF4E50",
              "#F9D423"
            ]
          },
          {
            "section": "bottom-left",
            "gradient": [
              "#F9D423",
              "#FC913A"
            ]
          },
          {
            "section": "top-right",
            "gradient": [
              "#24C6DC",
              "#514A9D"
            ]
          },
          {
            "section": "bottom-right",
            "gradient": [
              "#514A9D",
              "#E55D87"
            ]
          }
        ],
        "position": "center ",
        "style_notes": "Each quadrant of the X has a distinct gradient, giving a modern and vibrant look. The X is split visually in the middle, aligning with the 'Split' theme."
      }
    ]
  }
}

If you want to tweak or refine an image, first generate the JSON, make your changes there, and then ask ChatGPT to generate the image using your updated JSON. This gives you much more control over the final result.

Cheers!

P.S. Feel free to check out the app — it's live now at https://splitx.org/. Would love to hear what you think!

July 03, 2025 01:28 PM

How I understood the importance of FOSS, i.e., Free and Open Source Software

Hello people of the world wide web.
I'm Titas, a CS freshman trying to learn programming and build some cool stuff. Here's how I understood the importance of open source.

The first time I heard of open source was about 3 years ago in a YouTube video, but I didn't think much of it.
Read about it more and more on Reddit and in articles.

Fast forward to after high school — I'd failed JEE and had no chance of getting into a top engineering college. So I started looking at other options, found a degree and said to myself:
Okay, I can go here. I already know some Java and writing code is kinda fun (I only knew basics and had built a small game copying every keystroke of a YouTube tutorial).
So I thought I could learn programming, get a job, and make enough to pay my bills and have fun building stuff.

Then I tried to find out what I should learn and do.
Being a fool, I didn't look at articles or blog posts — I went to Indian YouTube channels.
And there was the usual advice: Do DSA & Algorithms, learn Web Development, and get into FAANG.

I personally never had the burning desire to work for lizard man, but the big thumbnails with “200k”, “300k”, “50 LPA” pulled me in.
I must’ve watched 100+ videos like that.
Found good creators too like Theo, Primeagen, etc.

So I decided I'm going to learn DSA.
First, I needed to polish my Java skills again.
Pulled out my old notebook and some YT tutorials, revised stuff, and started learning DSA.

It was very hard.
Leetcode problems weren't easy — I was sitting for hours just to solve a single problem.
3 months passed by — by then I had learnt arrays, strings, linked lists, searching, and sorting.
But solving Leetcode problems wasn't entertaining or fun.
I used to think — why should I solve these abstract problems just to work in FAANG (which I don’t even know if I want)?

Then I thought — let's learn some development.
Procrastinated on learning DSA, and picked up web dev — because the internet said so.
Learnt HTML and CSS in about 2-3 weeks through tutorials, FreeCodeCamp, and some practice.

Started learning JavaScript — it's great.
Could see my output in the browser instantly.
Much easier than C, which is in my college curriculum (though I had fun writing C).

Started exploring more about open source on YouTube and Reddit.
Watched long podcasts to understand what it's all about.
Learnt about OSS — what it is, about Stallman, GNU, FOSS.
OSS felt like an amazing idea — people building software and letting others use it for free because they feel like it.
The community aspect of it.
Understood why it’s stupid to have everything under the control of a capitalist company — one that can just decide one day to stop letting you use your own software that you paid for.

Now I’m 7 months into college, already done with sem 1, scored decent marks.
I enjoy writing code but haven't done anything substantial.
So I thought to ask for some help. But who to ask?

I remembered this distant cousin Kushal, who lives in Europe, has built some great software, and whom my mother mentioned like he was some kind of genius. I once had a brief conversation with him via text about whether I should take admission in BCA rather than an engineering degree, and his advice gave me some motivation and positivity. He said:

“BCA or BTech will for sure get you a job faster than traditional studying. If you can put in hours, that is way more important than IQ.
I have very average IQ but I just contributed to many projects.”

So 7 months later, I decided to text him again — and surprisingly, he replied and agreed to talk with me on a call.
Spoke with him for 45 odd minutes and asked a bunch of questions about software engineering, his work, OSS, etc.

Had much better clarity after talking with him.
He gave me the dgplug summer training docs and a Linux book he wrote.

So I started reading the training docs.

  • Step 0: Install a Linux distro → already have it ✅
  • Step 1: Learn touch typing → already know it ✅

Kept reading the training docs.
Read a few blog posts on the history of open source — already knew most of the stuff but learnt some key details.

Read a post by Anwesha on her experience with hacking culture and OSS as a lawyer turned software engineer — found it very intriguing.

Then watched the documentaries Internet's Own Boy and Coded Bias.
Learnt much more about Aaron Swartz than I knew — I only knew he co-founded Reddit and unalived himself after getting caught trying to open-source the MIT archives.

Now I had a deeper understanding of OSS and the culture.
But I had a big question about RMS — why was he so fixated on the freedom to hack and change stuff in the software he owned?
(Yes, the Free in FOSS doesn’t stand for free of cost — it stands for freedom.)

I thought free of cost makes sense — but why should someone have the right to make changes to paid software?
Couldn't figure it out.
Focused on JS again — also, end-semester exams were coming.
My university has 3 sets of internal exams before the end-semester written exams. Got busy with that.

Kept writing some JS in my spare time.
Then during my exams...

It was 3:37 am, 5 June. I had my Statistics exam that morning.
I was done with studying, so I was procrastinating — watching random YouTube videos.
Then this video caught my attention:
How John Deere Steals Farmers of $4 Billion a Year

It went deep into how John Deere installs software into their tractors to stop farmers and mechanics from repairing their own machines.
Only authorized John Deere personnel with special software could do repairs.
Farmers were forced to pay extra, wait longer, and weren’t allowed to fix their own property.

Turns out, you don’t actually buy the tractor — you buy a subscription to use it.
Even BMW, GM, etc. make it nearly impossible to repair their cars.
You need proprietary software just to do an oil change.

Car makers won’t sell the software to these business owners, BUT they’ll offer $7,500/year subscriptions to use it. One auto shop owner explained how he has to pay $50,000/year in subscriptions just to keep his business running.

These monopolies are killing small businesses.

It’s not just India — billion-dollar companies everywhere are hell-bent on controlling everything.
They want us peasants to rent every basic necessity — to control us.

And that night, at 4:15 AM, I understood:

OSS is not just about convenience.
It’s not just for watching movies with better audio or downloading free pictures for my college projects.
It’s a political movement — against control.
It’s about the right to exist, and the freedom to speak, share, and repair.


That's about it. I'm not a great writer — it's my first blog post.

Next steps?
Learn to navigate IRC.
Get better at writing backends in Node.js.
And I'll keep writing my opinions, experiences, and learnings — with progressively better English.

print("titas signing out , post '0'!")
June 23, 2025 07:55 AM

go workspaces usage in kubernetes project

Kubernetes (k/k) recently started using Go Workspaces to manage its multi-module setup more effectively.

The goal is to help keep things consistent across the many staging repositories and the main k/k project.

At the root of the k/k repo, you’ll find go.work and go.work.sum.

These files are auto-generated by the hack/update-vendor.sh script.

Specifically, the logic lives around lines 198–219 and 293–303.

At a high level, the script runs the following commands from the root of the repo:

go work init
go work edit -go 1.24.0 -godebug default=go1.24
go work edit -use .

git ls-files -z ':(glob)./staging/src/k8s.io/*/go.mod' \
  | xargs -0 -n1 dirname -z \
  | xargs -0 -n1 go work edit -use

go mod download
go work vendor

This creates:

  • a go.work file containing:

go 1.24.0

godebug default=go1.24

use (
    .
    ./staging/src/k8s.io/api
    ./staging/src/k8s.io/apiextensions-apiserver
    ./staging/src/k8s.io/apimachinery
    ...
    ./staging/src/k8s.io/sample-controller
)
  • go.work.sum file that tracks checksums for the workspace modules — like go.sum, but at the workspace level.

  • and the vendor/ directory which is populated via go work vendor and collects all dependencies from the workspace modules.
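
A quick way to confirm the workspace is in effect (a sketch; output abridged, paths illustrative), run from the repo root:

$ go env GOWORK     # prints the path to go.work when workspace mode is active
/home/user/kubernetes/go.work

$ go list -m        # in workspace mode, lists the workspace modules
k8s.io/kubernetes
k8s.io/api
k8s.io/apimachinery
...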

June 19, 2025 12:00 AM

OpenSSL legacy and JDK 21

openssl logo

While updating the Edusign validator to a newer version, I had to build the image with JDK 21 (available in Debian Sid). And while the application starts, it fails to read the TLS keystore file with a specific error:

... 13 common frames omitted
Caused by: java.lang.IllegalStateException: Could not load store from '/tmp/demo.edusign.sunet.se.p12'
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:140) ~[spring-boot-3.4.4.jar!/:3.4.4]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.createKeyStore(JksSslStoreBundle.java:107) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 25 common frames omitted
Caused by: java.io.IOException: keystore password was incorrect
at java.base/sun.security.pkcs12.PKCS12KeyStore.engineLoad(PKCS12KeyStore.java:2097) ~[na:na]
at java.base/sun.security.util.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:228) ~[na:na]
at java.base/java.security.KeyStore.load(KeyStore.java:1500) ~[na:na]
at org.springframework.boot.ssl.jks.JksSslStoreBundle.loadKeyStore(JksSslStoreBundle.java:136) ~[spring-boot-3.4.4.jar!/:3.4.4]
... 26 common frames omitted
Caused by: java.security.UnrecoverableKeyException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
... 30 common frames omitted

I understood that somehow it was not able to read the file due to a bad passphrase. But the same file with the same passphrase could be opened by the older version of the application (in the older containers).

After spending too many hours reading, I finally found the trouble. openssl was using too new an algorithm. By default it uses AES_256_CBC for encryption and PBKDF2 for key derivation. But if we pass -legacy to the openssl pkcs12 -export command, then it uses RC2_CBC or 3DES_CBC for certificate encryption, depending on whether the RC2 cipher is enabled.
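
For reference, a sketch of the kind of export command the fix implies (file names and password are placeholders):

# -legacy needs OpenSSL 3.x; it switches to the old RC2/3DES-based encryption
openssl pkcs12 -export -legacy \
    -in demo.edusign.sunet.se.crt \
    -inkey demo.edusign.sunet.se.key \
    -out demo.edusign.sunet.se.p12 \
    -passout pass:changeme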

This finally solved the issue and the container started cleanly.

June 04, 2025 02:06 PM

PyCon Lithuania, 2025

Each year, I try to experience a new PyCon. In 2025, PyCon Lithuania was added to my PyCon calendar.

pyon_lt_6.jpg

Day before the conference

What made this PyCon special is that we were traveling there as a family, and the conference days coincided with the Easter holidays. We utilized that to explore the city—the ancient cathedrals, palaces, old cafes, and of course the Lithuanian cuisine: Šaltibarščiai, Balandeliai, and Cepelinai.

Tuesday

The 22nd, the day before the conference, was all about practicing the talk and meeting the community. We had the pre-conference mingling session with the speakers and volunteers. It was time to meet some old and many new people. Then it was time for PyLadies. Inga from PyLadies Lithuania, Nina from PyLadies London, and I had a lovely dinner discussion: good food, technology, and the PyLadies community.

pyon_lt_2.jpg

Wednesday

The morning started early for us on the day of the conference. All 3 of us had different responsibilities. While Py was volunteering, I was speaking, and Kushal was the morning keynoter. A Python family in a true sense :)

pyon_lt_1.jpg

I had my talk, “Using PyPI Trusted Publishing to Ansible Release”, scheduled for the afternoon session. The talk was about automating the Ansible Community package release process with GitHub Actions, using trusted publishing in PyPI. I described what trusted publishing is, explained the need for it and how it is used, walked through the manual Ansible release process in a nutshell, and then moved on to what the release process looks like now with GitHub Actions and trusted publishing. The most important part was the lessons learned in the process, and how other open-source communities can get help and benefit from it. Here is the link for the slides of my talk. I got questions regarding trusted publishing, my experience as a release manager, and of course Ansible.

pyon_lt_0.jpeg

It was time to bid goodbye to PyCon LT and come back home. See you next year. Congratulations to the organizers for doing a great job organizing the conference.

pyon_lt_4.jpg

by Anwesha Das at April 30, 2025 10:49 AM

go_modules: how to create go vendor tarballs from subdirectories

The go_modules OBS service is used to download, verify, and vendor Go module dependency sources.

As described in the source project’s (obs-service-go_modules) README:

Using the go.mod and go.sum files present in a Go application, obs-service-go_modules will call Go tools in sequence:

  • go mod download
  • go mod verify
  • go mod vendor

obs-service-go_modules then creates a vendor.tar.gz archive (or another supported compression format) containing the vendor/ directory generated by go mod vendor.
This archive is produced in the RPM package directory and can be committed to OBS to support offline Go application builds for openSUSE, SUSE, and various other distributions.

The README also provides a few usage examples for packagers.

However, it wasn’t immediately clear how to use the go_modules OBS service to create multiple vendor tarballs from different subdirectories within a single Git source repository.

Below is an example where I create multiple vendor tarballs from a single Git repo (in this case, the etcd project):

<services>

  <!-- Service #1 -->
  <service name="obs_scm">
    <param name="url">https://github.com/etcd/etcd.git</param>
    <param name="scm">git</param>
    <param name="package-meta">yes</param>
    <param name="versionformat">@PARENT_TAG@</param>
    <param name="versionrewrite-pattern">v(.*)</param>
    <param name="revision">v3.5.21</param>
    <param name="without-version">yes</param>
  </service>

  <!-- Service #2 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
  </service>

  <!-- Service #3 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
    <param name="subdir">server</param>
    <param name="vendorname">vendor-server</param>
  </service>

  <!-- Service #4 -->
  <service name="go_modules">
    <param name="archive">*etcd.obscpio</param>
    <param name="subdir">etcdctl</param>
    <param name="vendorname">vendor-etcdctl</param>
  </service>

</services>


The above _service file defines four services:

  • Service 1 clones the GitHub repo github.com/etcd/etcd.git into the build root. The resulting output is a cpio archive blob—etcd.obscpio.

  • Service 2 locates the etcd.obscpio archive, extracts it, runs go mod download, go mod verify, and go mod vendor from the repo root, and creates the default vendor.tar.gz.

  • Service 3 and Service 4 work the same as Service 2, with one difference: they run the Go module commands from subdirectories:

    • Service 3 changes into the server/ directory before running the Go commands, producing a tarball named vendor-server.tar.gz.
    • Service 4 does the same for the etcdctl/ directory, producing vendor-etcdctl.tar.gz.

🔍 Note the subdir and vendorname parameters. These are the key to generating multiple vendor tarballs from various subdirectories, with custom names.
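
After the services run (e.g. via osc's local service run; the exact invocation varies by osc version), the package directory would contain something like:

etcd.obscpio             # from obs_scm (service #1)
vendor.tar.gz            # from go_modules at the repo root (service #2)
vendor-server.tar.gz     # from go_modules with subdir=server (service #3)
vendor-etcdctl.tar.gz    # from go_modules with subdir=etcdctl (service #4)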


I found the full list of parameters accepted by the go_modules service defined here1:

...
parser.add_argument("--strategy", default="vendor")
parser.add_argument("--archive")
parser.add_argument("--outdir")
parser.add_argument("--compression", default=DEFAULT_COMPRESSION)
parser.add_argument("--basename")
parser.add_argument("--vendorname", default=DEFAULT_VENDOR_STEM)
parser.add_argument("--subdir")
...

The default values are defined here2:

DEFAULT_COMPRESSION = "gz"
DEFAULT_VENDOR_STEM = "vendor"

Also, while writing this post, I discovered that the final vendor tarball can be compressed in one of the following supported formats3:

.tar.bz2
.tar.gz
.tar.lz
.tar.xz
.tar.zst

And finally, here’s the list of supported source archive formats (the blob from which the vendor tarball is created), powered by the libarchive Python module4:

READ_FORMATS = set((
    '7zip', 'all', 'ar', 'cab', 'cpio', 'empty', 'iso9660', 'lha', 'mtree',
    'rar', 'raw', 'tar', 'xar', 'zip', 'warc'
))

  1. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L227-L233 

  2. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L46C1-L47C31 

  3. https://github.com/openSUSE/obs-service-go_modules/blob/a9bf055557cf024478744fbd7e8621fd03cb2e87/go_modules#L119-L124 

  4. https://github.com/Changaco/python-libarchive-c/blob/1a5b505ab1818686c488b4904445133bcc86fb4d/libarchive/ffi.py#L243-L246 

April 10, 2025 12:00 AM

Blog Questions Challenge 2025

1. Why did you make the blog in the first place?

This blog initially started as part of the summer training by DGPLUG, where the good folks emphasize the importance of blogging and encourage everyone to write—about anything! That motivation got me into the habit, and I’ve been blogging on and off ever since.

2. What platform are you using to manage your blog and why did you choose it?

I primarily write on WriteFreely, hosted by Kushal, who was kind enough to host an instance. I also occasionally write on my WordPress blog. So yeah, I have two blogs.

3. Have you blogged on other platforms before?

I started with WordPress because it was a simple and fast way to get started. Even now, I sometimes post there, but most of my recent posts have moved to the WriteFreely instance.

4. How do you write your posts?

I usually just sit down and write everything in one go, followed by the editing part—skimming through it once, making quick changes, and then hitting publish.

5. When do you feel most inspired to write?

Honestly, I don’t wait for inspiration. I write whenever I feel like it—sometimes in a diary, sometimes on my laptop. A few of those thoughts end up as blog posts, while the rest get lost in random notes and files.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

It depends. After reading a few books and articles on writing, I started following a simple process: finish a draft in one sitting, come back to it later for editing, and then publish.

7. Your favorite post on your blog?

Ahh! This blog post on Google Cloud IAM is one I really like because people told me it was well-written! :)

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

Nope! I like it as it is. Keeping it simple for now.

A big thanks to Jason for mentioning me in the challenge!

Cheers!

March 29, 2025 05:32 AM

Access data persisted in Etcd with etcdctl and kubectl

I created the following CRD (Custom Resource Definition) with — kubectl apply -f crd-with-x-validations.yaml:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be in the form: <plural>.<group>
  name: myapps.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: example.com
  scope: Namespaced
  names:
    # kind is normally the CamelCased singular type. 
    kind: MyApp
    # singular name to be used as an alias on the CLI
    singular: myapp
    # plural name in the URL: /apis/<group>/<version>/<plural>
    plural: myapps
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            x-kubernetes-validations: 
              - rule: "self.minReplicas <= self.maxReplicas"
                messageExpression: "'minReplicas (%d) cannot be larger than maxReplicas (%d)'.format([self.minReplicas, self.maxReplicas])"
            type: object
            properties:
              minReplicas:
                type: integer
              maxReplicas:
                type: integer
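
To see the x-kubernetes-validations rule and its messageExpression in action, here is a hypothetical object that violates the rule (name and values made up):

apiVersion: example.com/v1
kind: MyApp
metadata:
  name: bad-app
spec:
  minReplicas: 5
  maxReplicas: 2

Applying it should be rejected by the API server with a message along the lines of: minReplicas (5) cannot be larger than maxReplicas (2).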

I want to check how the above CRD is persisted in Etcd.

I have two ways to do the job:

Option 1:

Use etcdctl to directly verify the persisted data in Etcd.1

My three steps process:

  • Exec inside the etcd pod in the kube-system namespace of your kubernetes cluster — kubectl exec -it -n kube-system etcd-kep-4595-cluster-control-plane -- /bin/sh
  • Create alias — alias e="etcdctl --endpoints 127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt"
  • Access the data — e get --prefix /registry/apiextensions.k8s.io/
sh-5.2# e get --prefix /registry/apiextensions.k8s.io/

/registry/apiextensions.k8s.io/customresourcedefinitions/shirts.stable.example.com
{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1beta1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","version":"v1","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"versions":[{"name":"v1","served":true,"storage":true}],"additionalPrinterColumns":[{"name":"Fruit","type":"string","JSONPath":".spec.color"}],"conversion":{"strategy":"None"},"preserveUnknownFields":false},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}

Option 2:

Use kubectl to access the persisted data from Etcd –

kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

> kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/shirts.stable.example.com

{"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1","metadata":{"name":"shirts.stable.example.com","uid":"09696eb0-d58b-4a21-8820-b2230b13707e","resourceVersion":"594","generation":1,"creationTimestamp":"2025-02-21T12:38:19Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"shirts.stable.example.com\"},\"spec\":{\"group\":\"stable.example.com\",\"names\":{\"kind\":\"Shirt\",\"plural\":\"shirts\",\"shortNames\":[\"shrt\"],\"singular\":\"shirt\"},\"scope\":\"Namespaced\",\"versions\":[{\"additionalPrinterColumns\":[{\"jsonPath\":\".spec.color\",\"name\":\"Fruit\",\"type\":\"string\"}],\"name\":\"v1\",\"schema\":{\"openAPIV3Schema\":{\"properties\":{\"spec\":{\"properties\":{\"color\":{\"type\":\"string\"},\"size\":{\"type\":\"string\"}},\"type\":\"object\"}},\"type\":\"object\"}},\"served\":true,\"storage\":true}]}}\n"},"managedFields":[{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:acceptedNames":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:conditions":{"k:{\"type\":\"Established\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"NamesAccepted\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"},{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"apiextensions.k8s.io/v1","time":"2025-02-21T12:38:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:conversion":{".":{},"f:strategy":{}},"f:group":{},"f:names":{"f:kind":{},"f:listKind":{},"f:plural":{},"f:shortNames":{},"f:singular":{}},"f:scope":{},"f:versions":{}}}}]},"spec":{"group":"stable.example.com","names":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"scope":"Namespaced","versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}],"conversion":{"strategy":"None"}},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":"2025-02-21T12:38:19Z","reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"shirts","singular":"shirt","shortNames":["shrt"],"kind":"Shirt","listKind":"ShirtList"},"storedVersions":["v1"]}}


  1. I realised while I’m accessing the same CRD data with etcdctl and kubectl, I’m getting a few things different in my output. In case of etcdctl — I get (i) "version":"v1", (ii) the CRD schema is stored in field "validation":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}} and (iii) and there’s a top level additionalPrinterColumns. While in case of kubectl— I don’t get the above bits, and instead I get both, the schema and the additionalPrinterColumns stored in the versions array - "versions":[{"name":"v1","served":true,"storage":true,"schema":{"openAPIV3Schema":{"type":"object","properties":{"spec":{"type":"object","properties":{"color":{"type":"string"},"size":{"type":"string"}}}}}},"additionalPrinterColumns":[{"name":"Fruit","type":"string","jsonPath":".spec.color"}]}]. This is (maybe) something to do with how currently (as of writing) Kubernetes stores/persists CRD v1 as v1beta1 in Etcd, because v1 takes more space to represent the same CRD (due to denormalization of fields among multi-version CRDs) and we have CRDs in the wild that are already bumping against the max allowed size (Thank you, Jordan Liggit for explaining this.) Read this2 and this3 for some context. 

  2. code block, where the encoding version for CRDs is configured 

  3. Attempt to bump the storage version from v1beta1 → v1, but was blocked on k/k PR #82292 

February 25, 2025 12:00 AM

slabtop - to check kernel memory usage, and kubelet's container_memory_kernel_usage metrics

Today, I learnt about slabtop1, a command line utility to check the memory used by the kernel
(or, as its man page says, it displays kernel slab cache information in real time2).
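
A couple of ways to invoke it (flags per its man page):

sudo slabtop          # interactive, top-like live view of the slab caches
sudo slabtop -o       # print the stats once and exit
sudo slabtop -s c     # sort by cache size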



(Logging for the future me)

So, what led to me learning about slabtop? 🙂

Jason Braganza taught me this, while we were preparing for our upcoming conference talk (on Kubernetes Metrics)!

Precisely, the following is the metric (exposed by the kubelet component within a Kubernetes cluster) that led to the discussion.

# HELP container_memory_kernel_usage Size of kernel memory allocated in bytes.
# TYPE container_memory_kernel_usage gauge
container_memory_kernel_usage{container="",id="/",image="",name="",namespace="",pod=""} 0 1732452865827

And how does kubelet get this “kernel memory allocation” information and feed it to the container_memory_kernel_usage metric?

The answer is (at least to the best of my understanding) –

The kubelet’s server package imports “github.com/google/cadvisor/metrics”3 (aka the cadvisor/metrics) module.

This cadvisor/metrics go module provides a NewPrometheusCollector() function (which kubelet uses here4).

The NewPrometheusCollector() function takes includedMetrics as one of its many parameters.

  r.RawMustRegister(metrics.NewPrometheusCollector(prometheusHostAdapter{s.host}, containerPrometheusLabelsFunc(s.host), includedMetrics, clock.RealClock{}, cadvisorOpts))

And when this includedMetrics contains cadvisormetrics.MemoryUsageMetrics (which it does in the case in question, check here5),

	includedMetrics := cadvisormetrics.MetricSet{
		...
		cadvisormetrics.MemoryUsageMetrics:  struct{}{},
		...
	}

then the NewPrometheusCollector() function exposes the container_memory_kernel_usage6 metric.

func NewPrometheusCollector(i infoProvider, f ContainerLabelsFunc, includedMetrics container.MetricSet, now clock.Clock, opts v2.RequestOptions) *PrometheusCollector {
        ...
        ...
	if includedMetrics.Has(container.MemoryUsageMetrics) {
		c.containerMetrics = append(c.containerMetrics, []containerMetric{
		       ...
		       ...
		       {
				name:      "container_memory_kernel_usage",
				help:      "Size of kernel memory allocated in bytes.",
				valueType: prometheus.GaugeValue,
				getValues: func(s *info.ContainerStats) metricValues {
					// the gauge’s value is the kernel memory usage reported for the container
					return metricValues{{value: float64(s.Memory.KernelUsage), timestamp: s.Timestamp}}
				},
			},
        ...
        ...

And as we see above in the definition of the container_memory_kernel_usage metric,
the valueType is prometheus.GaugeValue (so it’s a gauge-type metric),
and the value is float64(s.Memory.KernelUsage), where KernelUsage is defined here7 and interpreted here8.

I still feel I can go further down a few more steps to find out the true source of this information, but that’s all for now.


  1. which in turn gets the information from /proc/slabinfo

  2. to me it looks something like top or htop

  3. here: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L39C2-L39C38 

  4. kubelet registering the cadvisor metrics provided by cadvisor’s metrics.NewPrometheusCollector(...) function – https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L463 

  5. Kubelet server package creating a (cadvisor metrics based) set of includedMetrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/pkg/kubelet/server/server.go#L441-L451 

  6. codeblock adding container_memory_kernel_usage metrics: https://github.com/kubernetes/kubernetes/blob/d92b99ea637ee67a5c925e5e628f5816a01162ac/vendor/github.com/google/cadvisor/metrics/prometheus.go#L371-L393 

  7. Cadvisor’s MemoryStats struct providing KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/info/v1/container.go#L430-L432 

  8. Cadvisor’s setMemoryStats() function, setting value for KernelUsage: https://github.com/google/cadvisor/blob/5bd422f9e1cea876ee9d550f2ed95916e1766f1a/container/libcontainer/handler.go#L799-L803 

February 24, 2025 12:00 AM

I quit everything


title: I quit everything
published: 2025-02-19


I have never been the social media type of person. But that doesn’t mean I don’t want to socialize and get/stay in contact with other people. So although not being a power-user, I always enjoyed building and using my online social network. I used to be online on ICQ basically all my computer time and I once had a rich Skype contact list.

However, ICQ just died because people went away to use other services. I remember how excited I was when WhatsApp became available. To me it was the perfect messenger; no easier way to get in contact and chat with your friends and family (or just people you somehow had in your address book), for free. All of those services I’ve ever been using followed one of two possible scenarios:

  • Either they died because people left for the bigger platform
  • or the bigger platform was bought and/or changed their terms of use to make any further use completely unjustifiable (at least for me)

Quitstory

  • 2011 I quit StudiVZ, a social network that I joined in 2006, when it was still exclusive for students. However, almost my whole bubble left for Facebook so to stay in contact I followed. RIP StudiVZ, we had a great time.
  • Also 2011 I quit Skype, when it was acquired by Microsoft. I was not too smart back then, but I already knew I wanted to avoid Microsoft. It wasn’t hard anyway, most friends had left already.
  • 2017 I quit Facebook. That did cost me about half of my connections to old school friends (or acquaintances) and remote relatives. But the terms of use (giving up all rights on any content to Facebook) and their practices (crawling all my connections to use their personal information against them) made it impossible for me to stay.
  • 2018 I quit WhatsApp. It was a hard decision because, as mentioned before, I was once so happy about this app’s existence, and I was using it as my main communication channel with almost all friends and family. But in 2014 WhatsApp was bought by Facebook. In 2016 it was revealed that Facebook was combining data from the messenger and the Facebook platform for targeted advertising, and it announced changes to the terms of use. For me it was not possible to continue using the app.
  • Also 2018 I quit Twitter. Much too late. It had been the platform that allowed the rise of an old orange fascist, gave him the stage he needed, and did far too little against false information spreading like crazy. I didn’t need any whistleblowers to know that the recommendation algorithm was favoring hate speech and misinformation, or that this platform was not good for my mental health. I’m glad, though, that I was gone before the takeover.
  • Also 2018 I quit my Google account. I was using it to run my Android phone, mainly. However, quitting Google never hurt me - syncing my contacts and calendars via CardDAV and CalDAV has always been painless. Google Circles (which I peeked into for a week or so) never became a thing anyway. I had started using custom ROMs (mainly CyanogenMod, later LineageOS) for all my phones anyway.
  • 2020 I quit Amazon. Shopping is actually more fun again. I still do online shopping occasionally, most often trying to buy from the manufacturers directly, but if I can I try to do offline shopping in our beautiful city.
  • 2021 I quit my smartphone. I just stopped using my phone for almost anything except making and receiving calls. I tried a whole bunch of things to gain control over the device but found that it was impossible for me. The device had, in fact, more control over me than vice versa; I had to quit.
  • 2024 I quit PayPal. It’s a shame that our banks cannot come up with a convenient solution, and it’s also a shame I helped make that disgusting person who happens to own PayPal even richer.
  • Also in 2024 I quit GitHub. It’s the biggest code repository in the world. I’m sure it’s the biggest host of FOSS projects, too. Why? Why sell that to a company like Microsoft? I don’t want to have a Microsoft account. I had to quit.

Stopped using the smartphone

Implications

Call them what you may: big four, big five, GAFAM/FAAMG etc. I quit them all. They have a huge impact on our lives, and I think it’s not for the better. They have all shown often enough that they cannot be trusted; they gather and link all the information about us they can lay their hands on and use it against us, selling us out to the highest bidder (and the second and third highest, because copying digital data is cheap). I’m not regretting my decisions, but they were not without implications. And in fact I am quite pissed, because I don’t think it is my fault that I had to quit. It is something that those big tech companies took from me.

  • I lost contact with a bunch of people. Maybe this is a FOMO kind of thing; it’s not that I was in contact with these distant relatives or acquaintances, but I had a low threshold for reaching out. Not so much anymore.
  • People react angrily when they find they cannot reach me. I am available via certain channels, but a lot of people don’t understand my reasoning for not joining the big networks. As if I was trying to make their lives more complicated than necessary.
  • I can’t do OAuth. If online platforms don’t implement their own login and authentication but instead rely on identification via the big IdPs, I’m out. That means I will probably not be able to participate in Advent of Code this year. It’s kind of sad.
  • I’m the last to know. Not being in that WhatsApp group, and not reading the Signal message about the meeting cancellation 5 minutes before the scheduled start (because I don’t have Signal on my phone), does have that effect. Once, agreeing to something or scheduling a meeting carried a certain commitment. But these days, everything can be changed or cancelled just minutes before an appointment with a single text message. I feel old(fashioned) for trusting in others’ commitment, but I don’t want to give it up yet.

Of course there is still potential to quit even more: I don’t have a YouTube account (of course) but I still watch videos there. I do have a Netflix subscription, and cancelling that would put me into serious trouble with my family. I’m also occasionally looking up locations on Google Maps, but only if I want to look at the satellite pictures.

However, the web is becoming more and more bloated with ads and trackers, and the old pages that were fun to browse in the earlier days of the web have vanished; it’s not so much fun to use anymore. Maybe HTTP/S will be the next thing for me to quit.

Conclusions

I’m still using the internet to read my news, to connect with friends and family and to sync and backup all the stuff that’s important to me. There are plenty of alternatives to big tech that I have found work really well for me. The recipe is almost always the same: If it’s open and distributed, it’s less likely to fall into the hands of tech oligarchs.

I’m using IRC, Matrix and Signal for messaging, daily. Of those, Signal may have the highest risk of disappointing me one day, but I do have faith. Hosting my own Nextcloud and Email servers has to date been a smooth and nice experience. Receiving my news via RSS and atom feeds gives me control over the sources I want to expose myself to, without being flooded with ads.

I have tried Mastodon and other Fediverse networks, but I was not able to move any of my friends there to make it actually fun. As mentioned, I’ve never been too much into social media, but I like(d) to see some vital signs of different people in my life from time to time. I will not do Bluesky, as I cannot see how it differs from those big centralized platforms that have failed me.

It’s not a bad online life, and after some configuration it’s no harder to maintain than any social media account. I only wish it hadn’t been necessary for me to walk this path. The web could have developed much differently, and be an open and welcoming space for everyone today. Maybe we’ll get there someday.

February 19, 2025 12:00 AM

Simple blogging engine


title: Simple blogging engine
published: 2025-02-18


As mentioned in the previous post, I have been using several frameworks for blogging. But the threshold to overcome to sit down and write new articles was always too high to just get started. Additionally, I’m getting more and more annoyed by the internet, or specifically browsing the www via HTTP/S. It’s beginning to feel like hard work not to get tracked everywhere and not to support big tech and their fascist CEOs by using their services. That’s why I have found the gemini protocol interesting ever since I got to know about it. I wrote about it before:

Gemini blog post

That’s why I decided not to go HTTPS-first with my blog, but gemini-first. Although you’re probably reading this as the generated HTML or in your feed reader.

Low-threshold writing

To just get started, I’m now using my tmux session that is running 24/7 on my home server. It’s the session I open by default on all my devices, because it contains my messaging (IRC, Signal, Matrix) and news (RSS feeds). Now it also contains a neovim session that lets me just push all my thoughts into text files, easily and everywhere.

Agate

The format I write in is gemtext, a markup language that is even simpler than Markdown. Gemtext allows three levels of headings, links, lists, blockquotes and preformatted text, and that’s it. And to make my life even easier, I only need to touch a file .directory-listing-ok to let agate create an autoindex of each directory, so I don’t have to worry about housekeeping and linking my articles too much. I just went with this scheme to make sure my posts appear in the correct order:

blog
└── 2025
    ├── 1
    │   └── index.gmi
    └── 2
        └── index.gmi

When pointed to a directory, agate will automatically serve the index.gmi if it finds one.

To serve the files in my gemlog, I just copy them over as is, using rsync. If everyone browsed gemini space, I would be done at this point. I’m using agate, a gemini server written in Rust, to serve the static blog. Technically, gemini would allow more than that, using CGI to process requests and dynamically return responses, but simple is just fine.
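The publish step then boils down to something like this (the paths and host here are illustrative, not my exact setup):

    touch blog/.directory-listing-ok                           # let agate autoindex the directory
    rsync -av blog/ user@homeserver:/srv/gemini/content/blog/  # copy the gemtext files as is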

The not-so-low publishing threshold

However, if I ever want any person to actually read this, sadly I will have to offer more than gemtext. Translating everything into HTML and compiling an atom.xml comes with some more challenges. Now I need some metadata, like title and date. For now I’m just going to add that as preformatted text at the beginning of each file I want to publish. The advantage is that I can filter out files I want to keep private this way. Using ripgrep, I just find all files with the published directive and pipe them through my publishing script.

To generate the HTML, I’m going the route gemtext -> markdown -> html, for lack of better ideas. Gemtext to Markdown is trivial; I only need to reformat the links (using sed in my case). To generate the HTML I use pandoc, although it’s way too powerful and not at all lightweight for this task. But I just like pandoc. I’m adding simple.css so I don’t have to fuddle around with any design questions.

Simplecss
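Roughly, a simplified sketch of that route (the sed expression here only handles link lines that carry a label; my actual script differs in details):

    # gemtext "=> URL label" links become markdown "[label](URL)", then pandoc makes HTML
    for f in $(rg -l '^published:' blog); do
        sed -E 's|^=>[[:space:]]+([^[:space:]]+)[[:space:]]+(.+)|[\2](\1)|' "$f" > /tmp/post.md
        pandoc -s --css=simple.css /tmp/post.md -o "${f%.gmi}.html"
    done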

I was looking for an atom feed generator, until I noticed how easily this file can be generated manually. Again, a little bit of ripgrep and bash leaves me with an atom.xml that I’m actually quite happy with.
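Reduced to a sketch, the idea looks like this (simplified: a real feed also needs entry ids and a feed-level updated element):

    # one <entry> per published post, metadata pulled straight from the gemtext headers
    for f in $(rg -l '^published:' blog | sort -r); do
        title=$(rg -o -m1 '^title: (.+)' -r '$1' "$f")
        date=$(rg -o -m1 '^published: (.+)' -r '$1' "$f")
        printf '<entry><title>%s</title><updated>%sT00:00:00Z</updated></entry>\n' "$title" "$date"
    done >> atom.xml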

The yak can be shaved until the end of the world

I hope I have put everything out of the way to get started easily and quickly. I could keep configuring the system until the end of time to make unimportant things look better, but I don’t want to fall into that trap (again). I’m going to publish my scripts to a public repository soon, in case anyone feels inspired to go a similar route.

February 18, 2025 12:00 AM

Blog Questions Challenge 2025


title: Blog Questions Challenge 2025
published: 2025-02-18
tags: blogging


I’m picking up the challenge from Jason Braganza. If you haven’t, go visit his blog and subscribe to the newsletter ;)

Jason’s Blog

1. Why did you make the blog in the first place?

That’s been the first question I asked myself when starting this blog. It was part of the DGPLUG #summertraining and I kind of started without actually knowing what to do with it. But I did want to have my own little corner in cyberspace.

Why another blog?

2. What platform are you using to manage your blog and why did you choose it?

I have a home server running vim in a tmux session. The articles are written as gemtext, as I have decided that my gemlog should be the source of truth for my blog. I’ve written some little bash scripts to convert everything to HTML and an Atom feed as well, but I’m actually not very motivated anymore to care for website design. Gemtext is the simplest markup language I know, and keeping it simple makes the most sense to me.

Gemtext

3. Have you blogged on other platforms before?

I started writing on wordpress.com; without running my own server, it was the most accessible platform to me. When moving to my own infrastructure I used Lektor, a static website generator framework written in Python. It was quite nice and powerful, but in the end I wanted to get rid of the extra dependencies and simplify even more.

Lektor

4. How do you write your posts?

Rarely. If I write, I just write. Basically the same way I would talk. There were a very few posts when I did some research because I wanted to make it a useful and comprehensive source for future look-ups, but in most cases I’m simply too lazy. I don’t spend much time on structure or thinking about how to guide the reader through my thoughts, it’s just for me and anyone who cares.

5. When do you feel most inspired to write?

Always in situations when I don’t have the time to write, never when I do have the time. Maybe there’s something wrong with me.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

Yes, mostly. I do have a couple of posts that I didn’t publish immediately, so they are still not published. I find it hard to revise my own writing, so I try to avoid it by publishing immediately :)

7. Your favorite post on your blog?

The post I have looked up myself most often is the PostgreSQL migration one. It was a good idea to write that down ;)

Postgres migration between multiple instances

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

I just did a major refactoring of the system, basically doing everything manually now. It forces me to keep things simple, because I think it should be simple to write and publish a text online. I also hope to have lowered the threshold for me to start writing new posts. So piloting the current system, it is.

February 18, 2025 12:00 AM

pass using stateless OpenPGP command line interface

Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.

Installation

cargo install rsop rsop-oct

Then I copied the bash script from my repository to somewhere in my $PATH.

The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
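For a feel of the interface, these are the kinds of calls the SOP spec defines (a sketch; the exact invocations my pass patch makes may differ):

    rsop generate-key "User <user@example.com>" > key.asc   # create a new secret key
    rsop extract-cert < key.asc > cert.asc                  # derive the public cert from it
    rsop encrypt cert.asc < secret.txt > secret.asc         # encrypt to a cert
    rsop decrypt key.asc < secret.asc                       # decrypt with the key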

Usage

Nothing changed in my daily pass usage, except the number of times I am typing my PIN :)

February 12, 2025 05:26 AM

Using openpgp-card-tool-git with git

One of the powers of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time handles git signing & verification using OpenPGP, with my Yubikey doing the actual signing operation, via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.

Installation & configuration

cargo install openpgp-card-tool-git

Then you will have to update your git configuration (in my case, the global one).

git config --global gpg.program <path to oct-git>

I am assuming that you already had it configured before for signing; otherwise, you have to run the following two commands too.

git config --global commit.gpgsign true
git config --global tag.gpgsign true

Usage

Before you start using it, you want to save the pin in your system keyring.

Use the following command.

oct-git --store-card-pin

That is it; now your git commits will be signed using the oct-git tool.
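To check that it works end to end (plain git here, nothing specific to oct-git):

    git commit --allow-empty -m "test: signed commit"
    git log --show-signature -1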

In the next blog post I will show how to use the other tools from the author for various different OpenPGP operations.

February 11, 2025 11:12 AM

KubeCon + CloudNativeCon India 2024

Banner with KubeCon and Cloud Native Con India logos

Conference attendance had taken a hit since the onset of the COVID-19 pandemic. I attended many virtual conferences, though, and was glad to present at a few, like FOSDEM - a conference I had always longed to present at.

Sadly, the virtual conferences did not have the feel of in-person ones. With 2024 here and being fully vaccinated, I started attending a few in-person conferences again. The year started with FOSSASIA in Hanoi, Vietnam, followed by a few more over the next few months.

December 2024 was going to be special as we were all waiting for the first edition of KubeCon + CloudNativeCon in India. I had planned to attend the EU/NA editions of the conference, but visa issues made those more difficult to attend. As fate would have it, India was the one planned for me.

KubeCon + CloudNativeCon India 2024 took place in the capital city, Delhi, India, from 11th - 12th December 2024, with co-located events hosted at the same venue, Yashobhoomi Convention Centre, on 10th December 2024.

Venue

Let’s start with the venue. As an organizer of other conferences, the thing that blew my mind was the venue, YASHOBHOOMI (India International Convention and Expo Centre). The venue is huge enough to accommodate large-scale conferences, and I also got to know that the convention centre is still a work in progress, with more halls to come. If I heard correctly, another conference was running in parallel at the venue around the same time.

Now, let’s jump to the conference.

Maintainer Summit

The first day of the conference, 10th December 2024, was the CNCF Maintainers Summit. The event is exclusively for the people behind CNCF projects, providing space to showcase their projects and meet other maintainers face-to-face.

Due to the chilly and foggy morning, the event started a bit late to accommodate more participants for the very first talk. The event had a total of six talks, including the welcome note. Our project, Flatcar Container Linux, also had a talk accepted: “A Maintainer’s Odyssey: Time, Technology and Transformation”.

This talk took attendees through the journey of Flatcar Container Linux from a maintainer’s perspective. It shared Flatcar’s inspiration - the journey from a “friendly fork” of CoreOS Container Linux to becoming a robust, independent, container-optimized Linux OS. The beginning of the journey shared the daunting red CI dashboard, almost-zero platform support, an unstructured release pipeline, a mammoth list of outdated packages, missing support for ARM architecture, and more – hardly a foundation for future initiatives. The talk described how, over the years, countless human hours were dedicated to transforming Flatcar, the initiatives we undertook, and the lessons we learned as a team. A good conversation followed during the Q&A with questions about release pipelines, architectures, and continued in the hallway track.

During the second half, I hosted an unconference titled “Special Purpose Operating System WG (SPOS WG) / Immutable OSes”. The aim was to discuss the WG with other maintainers and enlighten the audience about it. During the session, we had a general introduction to the SPOS WG and immutable OSes. It was great to see maintainers and users from Flatcar, Fedora CoreOS, PhotonOS, and Bluefin joining the unconference. Since most attendees were new to Immutable OSes, many questions focused on how these OSes plug into the existing ecosystem and the differences between available options. A productive discussion followed about the update mechanism and how people leverage the minimal management required for these OSes.

I later joined the Kubeflow unconference. Kubeflow is a Kubernetes-native platform that orchestrates machine learning workflows through custom controllers. It excels at managing ML systems with a focus on creating independent microservices, running on any infrastructure, and scaling workloads efficiently. Discussion covered how ML training jobs utilize batch processing capabilities with features like Job Queuing and Fault Tolerance, while inference workloads operate in a serverless manner, scaling pods dynamically based on demand. Kubeflow abstracts away the complexity of different ML frameworks (TensorFlow, PyTorch) and hardware configurations (GPUs, TPUs), providing intuitive interfaces for both data scientists and infrastructure operators.

Conference Days

During the conference days, I spent much of my time at the booth and doing final prep for my talk and tutorial.

On the maintainers summit day, I had gone to check the room assigned for my sessions on the conference days, but discovered that the room didn’t exist in the venue. So, on the conference days, I started by informing the organizers about the schedule issue. Then I proceeded to the keynote auditorium, where Chris Aniszczyk, CTO, Linux Foundation (CNCF), kicked off the conference by sharing updates about the Cloud Native space and ongoing initiatives. This was followed by Flipkart’s keynote talk and a wonderful, insightful panel discussion. Nikhita’s keynote on “The Cloud Native So Far” is a must-watch, where she talked about CNCF’s journey until now.

After the keynote, I went to the speaker’s room, prepared briefly, and then proceeded to the community booth area to set up the Flatcar Container Linux booth. The booth received many visitors. Since I was alone there, I asked Anirudha Basak, a Flatcar contributor, to help out for a while. People asked all sorts of questions, from Flatcar’s relevance in the CNCF space to how it works as a container host and how they could adopt Flatcar in their infrastructure.

Around 5 PM, I wrapped up the booth and went to my talk room to present “Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext”. The talk covered an introduction to systemd-sysext, Flatcar & Cluster API. It then discussed how the current setup using Image Builder poses many infrastructure challenges, and how we’ve been utilizing systemd to resolve these challenges and simplify using ClusterAPI with multiple providers. The post-talk conversation was engaging, as we discussed sysext, which was new to many attendees, leading to productive hallway track discussions.

Day 2 began with me back in the keynote hall. First up were Aparna & Sumedh talking about Shopify using GenAI + Kubernetes for workloads, followed by Lachie sharing the Kubernetes story with Mandala and Indian contributors as the focal point. As an enthusiast photographer, I particularly enjoyed the talk presented through Lachie’s own photographs.

Soon after, I proceeded to my tutorial room. Though I had planned to follow the Flatcar tutorial we have, the AV setup broke down after the introductory start, and the session turned into a Q&A. It was difficult to regain momentum. The middle section was filled mostly with questions, many about Flatcar’s security perspective and its integration. After the tutorial wrapped up, lunch time was mostly taken up by hallway track discussions with tutorial attendees. We had the afternoon slot on the second day for the Flatcar booth, though attendance decreased as people began leaving for the conference’s end. The range of interactions remained similar, with some attendees from talks and workshops visiting the booth for longer discussions. I managed to squeeze in some time to visit the Microsoft booth at the end of the conference.

Overall, I had an excellent experience, and kudos to the organizers for putting on a splendid show.

Takeaways

Being at a booth representing Flatcar for the first time was a unique experience, with a mix of people - some hearing about Flatcar for the first time and confusing it with container images, requiring explanation, and others familiar with container hosts & Flatcar bringing their own use cases. Questions ranged from update stability to implementing custom modifications required by internal policies, SLSA, and more. While I’ve managed booths before, this was notably different. Better preparation regarding booth displays, goodies, and Flatcar resources would have been helpful.

The talk went well, but presenting a tutorial was a different experience. I had expected hands-on participation, having recently conducted a successful similar session at rootconf. However, since most KubeCon attendees didn’t bring computers, I plan to modify my approach for future KubeCon tutorials.

At the booth, I also received questions about WASM + Flatcar, as Flatcar was categorized under WASM in the display.


Credits for the photos go to CNCF (posted in the KubeCon + CloudNativeCon India 2024 Flickr album) & to @vipulgupta.travel

February 05, 2025 12:00 AM

Pixelfed on Docker

I have been running a Pixelfed instance for some time now at https://pixel.kushaldas.photography/kushal. This post contains quick setup instructions for the same, using docker/containers.

screenshot of the site

Copy over .env.docker file

We will need the .env.docker file and to modify it as required, especially the following; you will have to fill in the values for each of them.

APP_NAME=
APP_DOMAIN=
OPEN_REGISTRATION="false"   # because personal site
ENFORCE_EMAIL_VERIFICATION="false" # because personal site
DB_PASSWORD=

# Extra values to db itself
MYSQL_DATABASE=
MYSQL_PASSWORD=
MYSQL_USER=

CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"
QUEUE_DRIVER="redis"
SESSION_DRIVER="redis"

REDIS_HOST="redis"

ACTIVITY_PUB="true"

LOG_CHANNEL="stderr"

The actual docker compose file:

---

services:
  app:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    depends_on:
      - db
      - redis
    # The port statement makes Pixelfed run on Port 8080, no SSL.
    # For a real instance you need a frontend proxy instead!
    ports:
      - "8080:80"

  worker:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    entrypoint: /worker-entrypoint.sh
    depends_on:
      - db
      - redis
      - app
    healthcheck:
      test: php artisan horizon:status | grep running
      interval: 60s
      timeout: 5s
      retries: 1

  db:
    image: mariadb:11.2
    restart: unless-stopped
    env_file:
      - ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGE_ME
    volumes:
      - "/data/db-data:/var/lib/mysql"

  redis:
    image: zknt/redis
    restart: unless-stopped
    volumes:
      - "redis-data:/data"

volumes:
  redis-data:

I am using nginx as the reverse proxy. The only thing to remember there is to pass .well-known/acme-challenge to the correct directory for letsencrypt; the rest should point to the container.
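Something along these lines in the nginx server block (the paths are illustrative, and the port matches the compose file above; not my exact config):

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;          # where the ACME client writes its challenges
    }
    location / {
        proxy_pass http://127.0.0.1:8080;   # the app container published on port 8080
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }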

January 31, 2025 05:44 AM

Dealing with egl_bad_alloc error for webkit

I was trying out some Toga examples, and for the webview I kept getting the following error and a blank screen.

Could not create EGL surfaceless context: EGL_BAD_ALLOC.

After many hours of searching, I reduced the reproducer to a simple piece of Python Gtk code.

import gi

gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')

from gi.repository import Gtk, WebKit2

window = Gtk.Window()
window.set_default_size(800, 600)
window.connect("destroy", Gtk.main_quit)

scrolled_window = Gtk.ScrolledWindow()
webview = WebKit2.WebView()
webview.load_uri("https://getfedora.org")
scrolled_window.add(webview)

window.add(scrolled_window)
window.show_all()
Gtk.main()

Finally I asked for help in the #fedora IRC channel; within seconds, Khaytsus gave me the fix:

WEBKIT_DISABLE_COMPOSITING_MODE=1 python g.py

working webview
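The variable has to be set before WebKit initializes, so to avoid prefixing every run you can also export it in the shell (or set it at the very top of the script):

    export WEBKIT_DISABLE_COMPOSITING_MODE=1
    python g.py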

January 18, 2025 07:43 AM

About

I’m Nabarun Pal, also known as palnabarun or theonlynabarun, a distributed systems engineer and open source contributor with a passion for building resilient infrastructure and fostering collaborative communities. Currently, I work on Kubernetes and cloud-native technologies, contributing to the ecosystem that powers modern distributed applications.

When I’m not deep in code or community discussions, you can find me planning my next adventure, brewing different coffee concoctions, tweaking my homelab setup, or exploring new mechanical keyboards. I believe in the power of open source to democratize technology and create opportunities for everyone to contribute and learn.

A detailed view of my speaking engagements is on the /speaking page.

January 06, 2025 12:00 AM

Keynote at PyLadiesCon!

Since the very inception of my journey in Python and PyLadies, I have always thought of having a PyLadies Conference, a celebration of PyLadies. There were conversations here and there, but nothing was fruitful then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how come I didn’t?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?

On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice. I saw an email from PyLadiesCon, titled “Invitation to be a Keynote Speaker at PyLadiesCon”. The panic call went to Kushal Das: “Check if there is any attack on the Python server? I got a spammy email about PyLadiesCon, and the address is correct.” “No, nothing,” replied Kushal after checking. Wait, then… WHAT??? PyLadiesCon wants me to give the keynote. THE KEYNOTE, at PyLadiesCon.

Thank you Audrey for conceptualizing and creating PyLadies, our home.

keynote_pyladiescon.png

And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon on how PyLadies gave me purpose. See you all there.

Dreams do come true.

by Anwesha Das at November 29, 2024 05:35 PM

Looking back to Euro Python 2024

Over the years, when I am low, I always go back to the 2014 Euro Python talk “Farewell and Welcome Home: Python in Two Genders” by Naomi. It has become the first step of my coping mechanism and the door to my safe house. Though 2024 marked my first in-person Euro Python, I have long had a connection with and respect for the conference. A conference that believes community matters, that human values and feelings matter, and that is not afraid to walk the talk. And the conference lived up to my expectations in every bit.

euro_python_3.jpeg

My Talk: Intellectual Property Law 101

I had my talk on Intellectual Property Law on the first day. After a long time, I was giving a talk on a legal topic. This talk was dedicated to developers, so I concentrated only on those issues which concern them, and tried to stitch the related topics - patents, trademarks, and copyright - together into a smooth flow, making it easier for developers to understand, remember, and put to practical use in the future. I was concerned about whether I would be able to connect with people. Later, people came to me with several related questions, starting from

  • Why should I be concerned about patents?

  • Which license would fit my project?

  • Should I be scared about any Trademarks granted to other organizations under some other jurisdiction?

So on and so forth. Though I could not finish the whole talk due to time constraints, I am happy with the overall review.

Panel: Open Source Sustainability

On Day 1 of the main conference, we had the panel on Open Source Sustainability. This topic lies at the core of the open-source ecosystem: the sustainability of projects and the community, for the future and for stability. The panel had Deb Nicholson, Armin Ronacher, Çağıl Uluşahin Sönmez, Samuel Colvin, and me, with Artur Czepiel as the moderator. I was happy to represent my community’s side. It was a good discussion, and hopefully we could answer some questions of the community in general.

Birds of Feather session: Open Source Release Management

This Birds of a Feather (BoF) session was intended to deal with the release management of various Open Source projects, irrespective of their size. The discussion included all projects, from community-led projects to projects maintained/initiated by big enterprises, from projects maintained by one contributor to projects with several hundred contributors.

  • What methods do we follow regarding versioning, release cadence, and the process?

  • Do most of us follow manual processes or depend on automated ones?

  • What works and what does not, and how can we improve our lives?

  • What are the significant points that make the difference?

We discussed and covered the following topics: different aspects of release management of Open-Source projects, security, automation, CI usage, and documentation. We followed the Chatham House Rules during the discussion to provide the space for open, frank, and collaborative conversation.

PyLadies Lunch

And then comes my favorite part of the conference: the PyLadies Lunch. It was my seventh PyLadies lunch, and I was moderating it for the fifth time. But this time, my wonderful friends Laís and Çağıl were by my side, holding me up when I failed. I love every time I am at a PyLadies lunch. This is where I get my strength, energy, and love.

Workshop

I attended two workshops organized by Anezka Muller, Mia Bajić, and all the amazing PyLadies organizers:

  • Self-defense workshop where the moderators helped us navigate challenging situations we face in life, safeguard ourselves from them, and overcome them.

  • I AM Remarkable workshop, where we learned to tell people about our successes.

Representing Ansible Community

I always take the chance to meet Ansible community members face-to-face. Euro Python gave me another opportunity to do that. I learned about different user stories that we do not get to hear from our work corners, and about unique problems and their solutions in Ansible.
Fun fact: Maarten gave a review after learning I am Anwesha from the Ansible project. He said, ‘Can you Ansible people slow down in releasing new versions of Ansible? Every time we get used to one, there is a new version.’

euro_python_1.jpeg

Acknowledging mental health issues

The proudest moment for me personally was when I acknowledged my mental health issues, and later when people came to me saying how they related to me and how empowered they felt when I mentioned this.

euro_python_2.jpeg

PyLadies network at Red Hat

A network of PyLadies within Red Hat has been my dream since I joined Red Hat. Karolina agreed when I shared this with her at last year’s DevConf. And finally, we initiated it on day 2 of the conference. We are so excited for the future to come.

Meeting friends

Conference means friends. It was so great to meet so many friends after such a long time: Tylor, Nicholas, Naomi, Honza, Carol, Mike, Artur, Nikita, Valerio, and many new ones: Jannis, Joana, Christian, Martina, Tereza, Maria, Alyona, Mia, Naa, Bojan, and Jodie. A special note of love to Jodie: you held my hand and took me out of the dark.

euro_python_4.jpeg

The best is saved for last. Euro Python 2024 made 3 of my dreams come true.

  • Gender Neutral Washrooms

  • Sanitary products in the restrooms (I remember carrying sanitary napkins in my backpack at PyCon India and telling girls that if they needed one, it was available at the PyLadies booth).

  • Neurodiversity bag (which saved me at the conference; thank you, Karolina, for this)

euro_python_0.jpeg

I cannot wait for the next Euro Python; see you all at Euro Python 2025.

PS: Thanks to Laís, I will always have a small piece of Euro Python 2024 with me. I know I am loved and cared for.

by Anwesha Das at July 17, 2024 11:42 AM

A Tragic Collision: Lessons from the Pune Porsche Accident

I’m writing a blog post after a very long time; I kept procrastinating, but today I decided to write about something important, and yes, it is a hot topic in the country right now. In Pune, a 17-year-old boy was driving a Porsche while under the influence of alcohol. As I read in the news, he was speeding, and his car hit a two-wheeler, resulting in the death of two young people who were techies.
June 03, 2024 11:39 AM
