Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

KubeCon + CloudNativeCon India 2024

Banner with KubeCon and Cloud Native Con India logos

My conference attendance had taken a hit since the onset of the COVID-19 pandemic. I attended many virtual conferences in that period and was glad to present at a few, like FOSDEM, a conference I had always longed to present at.

Sadly, the virtual conferences did not have the feel of in-person conferences. With 2024 here and being fully vaccinated, I started attending a few in-person conferences. The year started with FOSSASIA in Hanoi, Vietnam, followed by a few more over the next few months.

December 2024 was going to be special as we were all waiting for the first edition of KubeCon + CloudNativeCon in India. I had planned to attend the EU/NA editions of the conference, but visa issues made those more difficult to attend. As fate would have it, India was the one planned for me.

KubeCon + CloudNativeCon India 2024 took place in Delhi, India’s capital, from 11th to 12th December 2024, with co-located events hosted at the same venue, the Yashobhoomi Convention Centre, on 10th December 2024.

Venue

Let’s start with the venue. As an organizer of other conferences, the thing that blew my mind was the venue itself: YASHOBHOOMI (India International Convention and Expo Centre). It is huge enough to accommodate large-scale conferences, and I also got to know that the convention centre is still a work in progress, with more halls to come. If I heard correctly, another conference was running in parallel at the venue around the same time.

Now, let’s jump to the conference.

Maintainer Summit

The first day of the conference, 10th December 2024, was the CNCF Maintainers Summit. The event is exclusive to the people behind CNCF projects, providing space to showcase their projects and to meet other maintainers face-to-face.

Due to the chilly and foggy morning, the event started a bit late to accommodate more participants for the very first talk. The event had a total of six talks, including the welcome note. Our project, Flatcar Container Linux, also had a talk accepted: “A Maintainer’s Odyssey: Time, Technology and Transformation”.

This talk took attendees through the journey of Flatcar Container Linux from a maintainer’s perspective. It shared Flatcar’s inspiration: the journey from a “friendly fork” of CoreOS Container Linux to becoming a robust, independent, container-optimized Linux OS. The early part of the journey featured a daunting red CI dashboard, almost-zero platform support, an unstructured release pipeline, a mammoth list of outdated packages, missing support for the ARM architecture, and more – hardly a foundation for future initiatives. The talk described how, over the years, countless human hours were dedicated to transforming Flatcar, the initiatives we undertook, and the lessons we learned as a team. A good conversation followed during the Q&A, with questions about release pipelines and architectures, and continued in the hallway track.

During the second half, I hosted an unconference titled “Special Purpose Operating System WG (SPOS WG) / Immutable OSes”. The aim was to discuss the WG with other maintainers and enlighten the audience about it. During the session, we had a general introduction to the SPOS WG and immutable OSes. It was great to see maintainers and users from Flatcar, Fedora CoreOS, PhotonOS, and Bluefin joining the unconference. Since most attendees were new to Immutable OSes, many questions focused on how these OSes plug into the existing ecosystem and the differences between available options. A productive discussion followed about the update mechanism and how people leverage the minimal management required for these OSes.

I later joined the Kubeflow unconference. Kubeflow is a Kubernetes-native platform that orchestrates machine learning workflows through custom controllers. It excels at managing ML systems with a focus on creating independent microservices, running on any infrastructure, and scaling workloads efficiently. Discussion covered how ML training jobs utilize batch processing capabilities with features like Job Queuing and Fault Tolerance - Inference workloads operate in a serverless manner, scaling pods dynamically based on demand. Kubeflow abstracts away the complexity of different ML frameworks (TensorFlow, PyTorch) and hardware configurations (GPUs, TPUs), providing intuitive interfaces for both data scientists and infrastructure operators.

Conference Days

During the conference days, I spent much of my time at the booth and doing final prep for my talk and tutorial.

On the maintainers summit day, I had gone to check the room assigned to me for the conference days, only to discover that the room didn’t exist in the venue. So, on the conference days, I started by informing the organizers about the schedule issue. Then, I proceeded to the keynote auditorium, where Chris Aniszczyk, CTO, Linux Foundation (CNCF), kicked off the conference by sharing updates about the Cloud Native space and ongoing initiatives. This was followed by Flipkart’s keynote talk and a wonderful, insightful panel discussion. Nikhita’s keynote, “The Cloud Native So Far”, is a must-watch, where she talked about CNCF’s journey until now.

After the keynote, I went to the speaker’s room, prepared briefly, and then proceeded to the community booth area to set up the Flatcar Container Linux booth. The booth received many visitors. Being alone there, I asked Anirudha Basak, a Flatcar contributor, to help for a while. People asked all sorts of questions, from Flatcar’s relevance in the CNCF space to how it works as a container host and how they could adapt Flatcar in their infrastructure.

Around 5 PM, I wrapped up the booth and went to my talk room to present “Effortless Clustering: Rethinking ClusterAPI with Systemd-Sysext”. The talk covered an introduction to systemd-sysext, Flatcar, and Cluster API. It then discussed how the current setup using Image Builder poses many infrastructure challenges, and how we’ve been utilizing systemd-sysext to resolve these challenges and simplify using ClusterAPI with multiple providers. The post-talk conversation was engaging, as we discussed sysext, which was new to many attendees, leading to productive hallway track discussions.
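To give a flavour of the sysext approach, here is a rough sketch of the workflow on a running machine (the URL and file name are illustrative, not the exact assets shipped for Flatcar/Cluster API): you drop an extension image into /etc/extensions and merge it over /usr, with no OS image rebuild involved.

# Fetch a Kubernetes sysext image (illustrative name and URL)
curl -LO https://example.com/kubernetes-v1.29.0-x86-64.raw
sudo mkdir -p /etc/extensions
sudo mv kubernetes-v1.29.0-x86-64.raw /etc/extensions/
# Overlay the extension's /usr contents onto the running system
sudo systemd-sysext merge
systemd-sysext status
# Binaries shipped in the extension (kubeadm, kubelet, ...) are now on $PATH
kubeadm version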

Day 2 began with me back in the keynote hall. First up were Aparna & Sumedh talking about Shopify using GenAI + Kubernetes for workloads, followed by Lachie sharing the Kubernetes story with Mandala and Indian contributors as the focal point. As a photography enthusiast, I particularly enjoyed the talk being presented through Lachie’s own photographs.

Soon after, I proceeded to my tutorial room. Though I had planned to follow the standard Flatcar tutorial we have, the AV setup broke down right after the introduction, and the session turned into a Q&A. It was difficult to regain momentum. The middle section was filled mostly with questions, many about Flatcar’s security posture and its integrations. After the tutorial wrapped up, lunch time was mostly taken up by hallway track discussions with tutorial attendees. We had the afternoon slot on the second day for the Flatcar booth, though attendance decreased as people began leaving toward the conference’s end. The range of interactions remained similar, with some attendees from the talks and workshops visiting the booth for longer discussions. I managed to squeeze in some time to visit the Microsoft booth at the end of the conference.

Overall, I had an excellent experience, and kudos to the organizers for putting on a splendid show.

Takeaways

Being at a booth representing Flatcar for the first time was a unique experience, with a mix of people: some hearing about Flatcar for the first time and confusing it with container images, which required explanation, and others familiar with container hosts & Flatcar bringing their own use cases. Questions ranged from update stability to implementing custom modifications required by internal policies, SLSA, and more. While I’ve managed booths before, this was notably different. Better preparation regarding booth displays, goodies, and Flatcar resources would have been helpful.

The talk went well, but presenting a tutorial was a different experience. I had expected hands-on participation, having recently conducted a successful similar session at rootconf. However, since most KubeCon attendees didn’t bring computers, I plan to modify my approach for future KubeCon tutorials.

At the booth, I also received questions about WASM + Flatcar, as Flatcar was categorized under WASM in the display.


Photo credits go to CNCF (via the KubeCon + CloudNativeCon India 2024 Flickr album) and to @vipulgupta.travel.

February 05, 2025 12:00 AM

Blog Questions Challenge 2025

Ava started this, Kev modified this, and Saptak egged me on to write this. So here goes …

screenshot of my homepage

What the home page looks like in 2025


1. Why did you make the blog in the first place?

I write for me mostly. Because writing helps me think. My thoughts are too scattered otherwise. I can’t not write. I’ve always written. Privately, publicly, there’s always been some place where I’ve jotted things down.

2. What platform are you using to manage your blog and why did you choose it?

I use Hugo to generate the site, which I host on my own Hetzner VM. I use it because I outgrew my previous tool, Nikola, which still holds a dear place in my heart. While Hugo is enormously complex, it is also deceptively simple enough to get started with. And it’s fast. That’s what I love about it. It lets me write. It does not get in my way. It lets me preview what I’m doing with its live server. And it’s unbelievably fast.

3. Have you blogged on other platforms before?

I’ve been writing in some form or other since the late 90s. So … yea :)
Livejournal, Blogger, self hosted Wordpress, wordpress.com, Posterous, Tumblr, self hosted Wordpress, self hosted Ghost, Nikola and now Hugo. It’s been quite a ride!

4. How do you write your posts?

I write them in Emacs (in Markdown, using Markdown Mode) on my desktop, with Hugo Server running alongside giving me a preview of what things will look like. Once I commit it to my self hosted Forgejo instance, an action publishes the site automatically.

5. When do you feel most inspired to write?

I never do. I write because it helps me function. And yet, it always feels like a chore.

6. Do you publish immediately after writing or do you let it simmer a bit as a draft?

I always publish it immediately. I almost never write something that is deeply thought out, that needs to stand the test of time. It’s the process, the writing, the putting words out of my mind, through my fingers down to paper, that leads to the result, the thought, the opinion, the aha, the insight. So I’m never quite done. Which means if I ever wait for finished, the post will never get published. The moment I publish something, is invariably the moment something needs changing. So I just go back and edit it. I never notify folks about updating things on the first day. If I ever edit something much later than a day or two, then I do.

7. Your favorite post on your blog?

None. All. They’re my thoughts, so depending on my mood, they’re either worthless or priceless gems!

8. Any future plans for your blog? Maybe a redesign, changing the tag system, etc.?

Not really. For all my wandering, I’ve only ever moved when my tools outgrew me1, or I outgrew my tools. For now, Hugo does all I ask of it, without getting in the way. The day that changes, will be the day I move.


I’ll ask Priyanka, Sreeram, Sandeep, Pradhvan, Rahul, Bhavin, Elia, Mandar, Saptak, Farhaan, Robin and Kushal to share more, if they have the time, energy and the inclination.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. Wordpress.com too restrictive, Ghost stopped serving my specific needs etc. ↩︎

February 04, 2025 06:40 AM

On Neil Gaiman

Considering how much I’ve quoted Neil Gaiman (here, here, here, here, here and many other places on the blog) and how much his stories have influenced me, I feel a bit obligated to put this personal statement out.

What he did was really, really wrong!
The girls, the women, were wronged. Grossly so. Often violently so.

Never meet your heroes and idols with feet of clay and all that.

So I’ve given away (or deleted) all my Gaiman books, save two. My collected editions of Sandman. And my signed copy of What You Need to Be Warm.
While it is true that Gaiman shot to stardom with Sandman, that was not the reason I bought this collected edition. I bought it for the young boy who would scramble through the lanes of Matunga and Fort, looking for more erudite comics after reading Moore’s V for Vendetta and Watchmen. Sandman was something I discovered on my own and enjoyed so much.
Besides, it was never about the writing at that stage. It was the stories. From all over the world and across cultures. That he’d reimagine for Sandman. (Ramadan, A Midsummer Night’s Dream, Thermidor, The Dream Hunters …) And even more importantly, it was the pictures, the drawings, the gorgeous art. (Yoshitaka Amano, Dave McKean, Todd Klein, and all the others.) So Sandman stays. And with What You Need to Be Warm, the money went to a smol shop and to a good cause, both. So I don’t feel bad owning it.

“Man’s not dead while his name is still spoken” — Terry Pratchett

And so this is the last, I speak your name. You’re dead to me.


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


February 03, 2025 08:18 AM

Hack Hugo Post Metadata With Python

A while back, I rejigged the sections on my site to better reflect how I think and write.
Which meant all the urls on all my posts changed, since they now used the new category as a slug, instead of ye old /blog.

For example, https://janusworx.com/blog/using-hugo-variables-to-help-with-mailto-links-in-hugo/
was now at https://janusworx.com/work/using-hugo-variables-to-help-with-mailto-links-in-hugo/

After searching a bit, I found that Hugo supports aliases. For me, that meant it would redirect the original /blog path url to its new location.
All I had to do was add an aliases: ["/blog/old-post-slug"] line to each post’s metadata.1
Line 4 in the snippet below shows what I added to fix the post above.

1  ---
2  title: "Using Hugo Variables to Help With Mailto Links in Hugo"
3  date: 2024-05-30T18:17:35+05:30
4  aliases: ["/blog/using-hugo-variables-to-help-with-mailto-links-in-hugo"]
5  categories: ["work"]
6  tags: [100WordHabit, Dgplug, Hugo]
7  summary: Shortcodes! Hugo Variables in Shortcodes!
8  ---
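Under the hood, for every alias, Hugo writes a small stub page at the old URL that redirects browsers to the new location and points search engines at the canonical one. Roughly like this (a sketch; the exact markup varies a little between Hugo versions):

<!DOCTYPE html>
<html lang="en-us">
  <head>
    <title>https://janusworx.com/work/using-hugo-variables-to-help-with-mailto-links-in-hugo/</title>
    <link rel="canonical" href="https://janusworx.com/work/using-hugo-variables-to-help-with-mailto-links-in-hugo/">
    <meta name="robots" content="noindex">
    <meta http-equiv="refresh" content="0; url=https://janusworx.com/work/using-hugo-variables-to-help-with-mailto-links-in-hugo/">
  </head>
</html>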

I did not want to do this by hand for 800+ posts.
One stroke of luck for me was that I had let Hugo use its default behaviour of generating url slugs from the file names. So even though the category slugs had changed (from /blog to /work or from /blog to /personal), the url slugs would stay the same. Which meant I could whip up a script to run through all my markdown posts and add the alias line.
So I did.

from pathlib import Path

INPUT_FOLDER = Path("old-posts-folder")
OUTPUT_FOLDER = Path("modified-posts-folder")

for each_file in INPUT_FOLDER.iterdir():
    with open(each_file, 'r') as file_to_read:
        # the url slug is just the file name without its extension
        alias_derived_from_file = each_file.stem
        contents_as_list = file_to_read.readlines()
        # insert right below the title and date lines
        contents_as_list.insert(3, f"aliases: [\"/blog/{alias_derived_from_file}\"]\n")
        with open(Path(OUTPUT_FOLDER, each_file.name), 'w+') as file_to_write:
            file_to_write.writelines(contents_as_list)


It takes all the posts from my old folder, inserts the alias line, and puts them into a new folder.2 In essence: take each file, figure out the url slug from the file name, read in the contents as a list, insert my alias at position 3 of the list (the fourth line, thanks to zero-based indexing, right below the title and date), and then write it all out to a new file.

I ran it, published the site and then went to check on the old urls with bated breath.
Hurrah, it all worked :)


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. it’s a list, so I can add more aliases if I want to ↩︎

  2. no sense in botching up my originals :) ↩︎

February 03, 2025 04:03 AM

Tmux Start Session Maximized With Three Panes

So I got tired of starting up a dedicated tmux session to manage all the work related to my writing sessions. Over the past few months, I’ve boiled the layout down to three panes.
And it still irks me that I have to …

  1. Launch Terminal
  2. Launch Tmux
  3. Split it into three windows err … panes.
  4. Go to the top left pane and launch Hugo server
  5. Switch to the right pane and then launch Emacs with whatever new post I want to write today.

So of course, a Rube Goldberg-esque, tiny bash-pipey monster took form. It now resides, chained to an alias, hssx1, in my .bash_aliases file.

alias hssx='cd /path/to/my/hugo/folder && \
xdotool windowsize $(xdotool getactivewindow) 100% 100% && \
tmux new-session \; split-window -h \; select-pane -l \; split-window -v \; select-pane -U \; send-keys "hugo serve" C-m \; select-pane -R'
  1. The first line switches to my hugo folder
  2. The second calls xdotool and maximises the terminal window
  3. And the last line is a series of instructions to the tmux command. I’ve split it below for readability.
tmux new-session \; #Tmux start a new session
split-window -h \; # Split the window into two vertical panes
select-pane -l \; # Switch to the left pane
split-window -v \; # Split that into two horizontal panes
select-pane -U \; # Select the upper pane
send-keys "hugo serve" C-m \; # Type in `hugo serve` followed by Enter
select-pane -R # Select the right pane

And boom!

bash terminal showing a tmux window split into three panes

Click pic for a larger version


Feedback on this post?
Mail me at feedback at this domain or continue the discourse here.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. hugo start session and x just because all my aliases have ended with x for years and years ↩︎

February 01, 2025 03:57 AM

2025

What do all the stars and daggers after the book titles mean?


Note to self, for this year: Read less, write more notes. Abandon more books.

January

  1. Murder at the Vicarage, Agatha Christie*
  2. The Body in the Library, Agatha Christie*
  3. The Moving Finger, Agatha Christie*
  4. Sleeping Murder, Agatha Christie*
  5. A Murder Is Announced, Agatha Christie*
  6. They Do It with Mirrors, Agatha Christie*
  7. My Horrible Career, John Arundel*
  8. The Veiled Lodger, Sherlock & Co. Podcast*
  9. Hardcore History, Mania for Subjugation II, Episode 72*
  10. A Pocket Full of Rye, Agatha Christie*
  11. 4.50 from Paddington, Agatha Christie*
  12. The Mirror Crack’d From Side to Side, Agatha Christie*
  13. As You Wish: Inconceivable Tales from the Making of The Princess Bride, Cary Elwes & Joe Layden*
  14. A Caribbean Mystery, Agatha Christie*
  15. At Bertram’s Hotel, Agatha Christie*
  16. Nemesis, Agatha Christie*
  17. Miss Marple’s Final Cases, Agatha Christie*

January 31, 2025 06:30 PM

Pixelfed on Docker

I have been running a Pixelfed instance for some time now at https://pixel.kushaldas.photography/kushal. This post contains quick setup instructions for the same, using docker/containers.

screenshot of the site

Copy over .env.docker file

We will need the .env.docker file, modified as required, especially the following variables; you will have to fill in the values for each one of them.

APP_NAME=
APP_DOMAIN=
OPEN_REGISTRATION="false"   # because personal site
ENFORCE_EMAIL_VERIFICATION="false" # because personal site
DB_PASSWORD=

# Extra values to db itself
MYSQL_DATABASE=
MYSQL_PASSWORD=
MYSQL_USER=

CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"
QUEUE_DRIVER="redis"
SESSION_DRIVER="redis"

REDIS_HOST="redis"

ACTIVITY_PUB="true"

LOG_CHANNEL="stderr"

The actual docker compose file:

---

services:
  app:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    depends_on:
      - db
      - redis
    # The port statement makes Pixelfed run on Port 8080, no SSL.
    # For a real instance you need a frontend proxy instead!
    ports:
      - "8080:80"

  worker:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    entrypoint: /worker-entrypoint.sh
    depends_on:
      - db
      - redis
      - app
    healthcheck:
      test: php artisan horizon:status | grep running
      interval: 60s
      timeout: 5s
      retries: 1

  db:
    image: mariadb:11.2
    restart: unless-stopped
    env_file:
      - ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGE_ME
    volumes:
      - "/data/db-data:/var/lib/mysql"

  redis:
    image: zknt/redis
    restart: unless-stopped
    volumes:
      - "redis-data:/data"

volumes:
  redis-data:

I am using nginx as the reverse proxy. The only thing to remember there is to pass .well-known/acme-challenge to the correct directory for letsencrypt; the rest should point to the container.
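A minimal sketch of that nginx server block (the domain and paths are illustrative; adjust them to your own certificates and webroot):

server {
    listen 443 ssl;
    server_name pixel.example.tld;

    ssl_certificate     /etc/letsencrypt/live/pixel.example.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pixel.example.tld/privkey.pem;

    # serve the letsencrypt challenge files straight from disk
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # everything else goes to the Pixelfed container on port 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}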

January 31, 2025 05:44 AM

Dealing with egl_bad_alloc error for webkit

I was trying out some Toga examples, and for the webview I kept getting the following error and a blank screen.

Could not create EGL surfaceless context: EGL_BAD_ALLOC.

After many hours of searching, I reduced the reproducer to a simple piece of Python Gtk code.

import gi

gi.require_version('Gtk', '3.0')
gi.require_version('WebKit2', '4.0')

from gi.repository import Gtk, WebKit2

# a bare window holding a single WebKit webview is enough to reproduce the error
window = Gtk.Window()
window.set_default_size(800, 600)
window.connect("destroy", Gtk.main_quit)

scrolled_window = Gtk.ScrolledWindow()
webview = WebKit2.WebView()
webview.load_uri("https://getfedora.org")
scrolled_window.add(webview)

window.add(scrolled_window)
window.show_all()
Gtk.main()

Finally, I asked for help in the #fedora IRC channel, and within seconds Khaytsus gave me the fix:

WEBKIT_DISABLE_COMPOSITING_MODE=1 python g.py

working webview

January 18, 2025 07:43 AM

pastewindow.nvim my first neovim plugin

pastewindow is a neovim plugin written in Lua to help paste text from a buffer to a different window in Neovim. This is my first attempt at writing a plugin.

We can select a window (in the GIF below I am using a bash terminal as the target) and send any text to that window. This will be helpful in my teaching sessions, especially when modifying larger Python functions.
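The core idea is small: read lines from the current buffer and write them to the channel of a terminal buffer displayed in another window. A rough sketch of it (illustrative only, not the plugin's actual API):

-- illustrative sketch, not pastewindow.nvim's real code
local function paste_to_window(target_win)
  -- all lines of the current buffer
  local lines = vim.api.nvim_buf_get_lines(0, 0, -1, false)
  -- the buffer shown in the target window; assumed to be a terminal buffer
  local target_buf = vim.api.nvim_win_get_buf(target_win)
  -- terminal buffers expose a job channel we can write to
  local chan = vim.bo[target_buf].channel
  vim.api.nvim_chan_send(chan, table.concat(lines, "\n") .. "\n")
end

-- e.g. paste the current buffer into window number 2:
-- :lua paste_to_window(vim.fn.win_getid(2))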

demo

I am yet to go through all the Advent of Neovim videos from TJ DeVries. I am hoping to improve the plugin (and add more features) after I learn about plugin development from the videos.

December 27, 2024 08:19 AM

Open Source talk at KTH computer science students organization

screenshot of the talk page

Last Tuesday, during lunch hours, I gave a talk at the KTH computer science students' organization. The topic was Open Source and career. My main goal was to tell the attendees that contribution size does not matter, but that continuing to contribute to various projects can change someone's life and career in a positive way. I talked about the history of the Free Software movement and Open Source. I also talked a bit about Aaron Swartz and asked the participants to watch the documentary The Internet's Own Boy. Some were surprised to hear about Sunet's Open Source work.

photo of KTH logo on the building

There were around 70 people, and a few people later messaged me about how they now think about contributing after my talk. The best part was one student who messaged the next day and said that he had contributed a small patch to a project.

I also told them about PyLadies Stockholm and other local efforts from various communities. There was also a surprise visit to the #curl channel on IRC, thanks to bagder and icing :)

December 14, 2024 11:17 AM

Basedpyright and neovim

screenshot from the website

Basedpyright is a fork of pyright with various type checking improvements, improved vscode support and pylance features built into the language server. It has a list of benefits over Pyright.

In case you want to use it inside neovim via Mason, you will have to remember to put the configuration inside a settings key. The following is from my setup.

basedpyright = {
  settings = {
    basedpyright = {
      analysis = {
        diagnosticMode = 'openFilesOnly',
        typeCheckingMode = 'basic',
        useLibraryCodeForTypes = true,
        autoSearchPaths = true,
        enableTypeIgnoreComments = false,
        -- per-diagnostic severity overrides
        diagnosticSeverityOverrides = {
          reportGeneralTypeIssues = 'none',
          reportArgumentType = 'none',
          reportUnknownMemberType = 'none',
          reportAssignmentType = 'none',
        },
      },
    },
  },
},
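For context, here is roughly where that table ends up; a sketch assuming nvim-lspconfig with completion capabilities from cmp_nvim_lsp (note that capabilities belongs at this level, not inside settings):

-- sketch: wiring the settings table into nvim-lspconfig
local capabilities = require('cmp_nvim_lsp').default_capabilities()

require('lspconfig').basedpyright.setup({
  capabilities = capabilities, -- client capabilities go here
  settings = {
    basedpyright = {
      analysis = { typeCheckingMode = 'basic' }, -- the table shown above
    },
  },
})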

I struggled for a few hours to fix this a couple of days ago.

December 03, 2024 08:50 PM

Keynote at PyLadiesCon!

Since the very inception of my journey in Python and PyLadies, I have always thought of having a PyLadies Conference, a celebration of PyLadies. There were conversations here and there, but nothing was fruitful then. In 2023, Mariatta, Cheuk, Maria Jose, and many more PyLadies volunteers around the globe made this dream come true, and we had our first ever PyLadiesCon.
I submitted a talk for the first-ever PyLadiesCon (how could I not?), and it was rejected. In 2024, I missed the CFP deadline. I was sad. Would I never be able to participate in PyLadiesCon?

On October 10th, 2024, I had my talk at PyCon NL. I woke up early to practice. I saw an email from PyLadiesCon, titled "Invitation to be a Keynote Speaker at PyLadiesCon". The panic call went to Kushal Das: "Check if there is any attack on the Python server? I got a spammy email about PyLadiesCon and the address is correct." "No, nothing," replied Kushal after checking. Wait, then "WHAT???". PyLadiesCon wants me to give the keynote. THE KEYNOTE at PyLadiesCon.

Thank you Audrey for conceptualizing and creating PyLadies, our home.

keynote_pyladiescon.png

And here I am now. I will give the keynote on 7 December 2024 at PyLadiesCon on how PyLadies gave me purpose. See you all there.

Dreams do come true.

by Anwesha Das at November 29, 2024 05:35 PM

Looking back to Euro Python 2024

Over the years, whenever I am low, I always go back to the 2014 Euro Python talk "Farewell and Welcome Home: Python in Two Genders" by Naomi. It has become the first step of my coping mechanism and the door to my safe house. Though 2024 marked my first in-person Euro Python, I have long had a connection with and respect for the conference. A conference that believes community matters, that human values and feelings matter, and that is not afraid to walk the talk. And the conference stood up to my expectations in every bit.

euro_python_3.jpeg

My Talk: Intellectual Property Law 101

I gave my talk on Intellectual Property Law on the first day. After a long time, I was giving a talk on a legal topic. This talk was dedicated to developers, so I concentrated only on those issues which concern developers, and tried to stitch the related topics of Patents, Trademarks, and Copyright together into a smooth flow, since that makes it easier for developers to understand and to remember for all practical purposes and future use. I was concerned whether I would be able to connect with people. Later, people came to me with several related questions, starting from

  • Why should I be concerned about patents?

  • Which license would fit my project?

  • Should I be scared about any Trademarks granted to other organizations under some other jurisdiction?

So on and so forth. Though I could not finish the whole talk due to time constraints, I am happy with the overall response.

Panel: Open Source Sustainability

On Day 1 of the main conference, we had the panel on Open Source Sustainability. This topic lies at the core of the open-source ecosystem: the sustainability of projects and communities, for the future and for stability. The panel had Deb Nicholson, Armin Ronacher, Çağıl Uluşahin Sönmez, Samuel Colvin, and me, with Artur Czepiel as the moderator. I was happy to represent my community's side. It was a good discussion, and hopefully, we could answer some questions of the community in general.

Birds of Feather session: Open Source Release Management

This Birds of a Feather (BoF) session was intended to deal with the release management of various Open Source projects, irrespective of their size. The discussion included all projects, from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with several hundred contributors.

  • What methods do we follow regarding versioning, release cadence, and the process?

  • Do most of us follow manual processes or depend on automated ones?

  • What works and what does not, and how can we improve our lives?

  • What are the significant points that make the difference?

We discussed and covered the following topics: different aspects of release management of Open-Source projects, security, automation, CI usage, and documentation. We followed the Chatham House Rules during the discussion to provide the space for open, frank, and collaborative conversation.

PyLadies Lunch

And then comes my favorite part of the conference: the PyLadies Lunch. It was my seventh PyLadies Lunch, and I was moderating it for the fifth time. But this time, my wonderful friends Laís and Çağıl were by my side, holding me up when I failed. I love every time I am at a PyLadies Lunch. This is where I get my strength, energy, and love.

Workshop

I attended two workshops organized by Anezka Muller, Mia Bajić, and all the amazing PyLadies organizers:

  • Self-defense workshop where the moderators helped us navigate challenging situations we face in life, safeguard ourselves from them, and overcome them.

  • I AM Remarkable workshop, where we learned to tell people about our successes.

Representing Ansible Community

I always take the chance to meet Ansible community members face-to-face, and Euro Python gave me another opportunity to do that. I learned about different user stories that we do not get to hear from our work corners, and about unique problems and their solutions in Ansible.
Fun fact: Maarten gave a review after learning that I am Anwesha from the Ansible project. He said, 'Can you Ansible people slow down in releasing new versions of Ansible? Every time we get used to one, there is a new version.'

euro_python_1.jpeg

Acknowledging mental health issues

The proudest moment for me personally was when I acknowledged my mental health issues, and later when people came to me saying how they related to me and how empowered they felt when I mentioned it.

euro_python_2.jpeg

PyLadies network at Red Hat

A network of PyLadies within Red Hat has been my dream since I joined Red Hat. Karolina agreed when I shared this with her at last year's DevConf. And finally, we initiated it on day 2 of the conference. We are so excited for the future to come.

Meeting friends

Conferences mean friends. It was so great to meet so many friends after such a long time: Tylor, Nicholas, Naomi, Honza, Carol, Mike, Artur, Nikita, Valerio, and many new ones: Jannis, Joana, Christian, Martina, Tereza, Maria, Alyona, Mia, Naa, Bojan, and Jodie. A special note of love to Jodie, who held my hand and took me out of the dark.

euro_python_4.jpeg

The best is saved for the last. Euro Python 2024 made 3 of my dreams come true.

  • Gender Neutral Washrooms

  • Sanitary products in restrooms (I remember carrying sanitary napkins in my backpack at PyCon India and telling girls that if they needed them, they were available at the PyLadies booth).

  • Neurodiversity bag (which saved me at the conference; thank you, Karolina, for this)

euro_python_0.jpeg

I cannot wait for the next Euro Python; see you all at Euro Python 2025.

PS: Thanks to Laís, I will always have a small piece of Euro Python 2024 with me. I know I am loved and cared for.

by Anwesha Das at July 17, 2024 11:42 AM

Euro Python 2024

It is July, and it is time for Euro Python, and 2024 is my first Euro Python. Some busy days are on the way. Like every other conference, I have my diary, and the conference days are full of various activities.

euro_travel_0.jpeg

Day 0 of the main conference

After a long time, I will give a legal talk. We are going to dig into some basics of Intellectual Property: What is it? Why do we need it? What are the different kinds of intellectual property? It is a legal talk designed for developers, so anyone and everyone from the community, even without prior legal knowledge, can understand the content and use it to understand their fundamental rights and duties as developers. The talk, Intellectual Property 101, is scheduled at 11:35 hrs.

Day 1 of the main conference

Day 1 is PyLadies Day, a day dedicated to PyLadies. We have crafted the day with several different kinds of events. The day opens with a self-defense workshop at 10:30 hrs. PyLadies, throughout the world, aims to provide and foster a safe space for women and friends in the Python community. This workshop is an extension of that goal. We will learn how to deal with challenging, inappropriate behavior, whether in the community, at work, or in any social space. We will have a trained psychologist as a session guide to help us. This workshop is as important today as it was yesterday and may be in the future (at least until the enforcement of CoC is clear). I am so looking forward to the workshop. Thank you, Mia, Laís, and all the PyLadies for organizing this and giving shape to my long-cherished dream.

Then we have my favorite part of the conference, PyLadies Lunch. I crafted the afternoon with a little introduction session, shout-out session, food, fun, laughter, and friends.

After the PyLadies Lunch, I have my only non-PyLadies session, which is a panel discussion on Open Source Sustainability. We will discuss the different aspects of sustainability in the open source space and community.

Again, it is PyLadies' time. Here, we have two sessions.

IAmRemarkable (https://ep2024.europython.eu/pyladies-events#iamremarkable), a workshop to empower you by celebrating your achievements and to help fight impostor syndrome. The workshop will help you celebrate your accomplishments and improve your self-promotion skills.

The second session is a 1:1 mentoring event, Meet & Greet with PyLadies. Here, the willing PyLadies will be able to mentor and be mentored. They can be coached in different subjects, starting with programming, learning, things related to job and/or career, etc.

Birds of feather session on Release Management of Open Source projects

It is an open discussion related to the release management of the Open Source ecosystem.
The discussion includes everything from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with a contributor base of several hundred. What are the different methods we follow regarding versioning, release cadence, and the process itself? Do most of us follow manual processes or depend on automated ones? What works and what does not, and how can we improve our lives? What are the significant points that make the difference? We will discuss and cover the following topics: release management of open source projects, security, automation, CI usage, and documentation. In the discussion, I will share my release automation journey with Ansible. We will follow the Chatham House Rule during the discussion to provide the space for open, frank, and collaborative conversation.

So, here comes the days of code, collaboration, and community. See you all there.

PS: I miss my little Py-Lady volunteering at the booth.

by Anwesha Das at July 08, 2024 09:56 AM

Event Driven Ansible, what, why and how?

Ansible Playbooks are a well-known term; now there is a new term being floated in the project: Ansible Rulebooks. Today we are going to discuss Ansible's journey from Playbooks to Rulebooks, or rather, Playbooks with Rulebooks.

What is Event Driven Ansible?

What is Event Driven Ansible? In simple terms, some action is triggered by some event. The idea of EDA comes from event-driven architecture. Event Driven Ansible runs code automatically based on received event notifications.

Some important terms:

What is event in Event Driven Ansible?

The event is the notification of a certain incident.

Where do we get the events from?

We get the events from event sources. Ansible EDA provides different plugins to support various event sources. There are several event source plugins, such as:
url_check (checking the HTTP status code), webhook (receiving and checking events from a webhook), journald (monitoring the journald logs), and the list goes on.

When to take actions?

A rulebook defines conditions and the actions to take when those conditions are fulfilled. Conditions use operators on string, boolean, and numerical data. Actions are what happen once the conditions are met: running a playbook, setting a fact, running a module, etc.
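For instance, a rule's condition can combine such operators; a made-up fragment for illustration (not part of the demo project below):

rules:
  - name: Restart the service after repeated failures
    condition: event.payload.status == "down" and event.payload.retries > 3
    action:
      run_playbook:
        name: restart_service.yml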

Small example Project

Here is a small example of Event Driven Ansible and how it is run. The idea is that on receiving a particular message (here, the number 42), a playbook will run on the host. There are the following 3 files:

demo_rule.yml

---
- name: Listen for events on a webhook
  hosts: all

  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 8000

  rules:
    - name: Say thank you
      condition: event.payload.message == "42"
      action:
        run_playbook:
          name: demo.yml

This is the rulebook. We are using the webhook plugin here as the event source. As a rule, on the event of receiving the message 42 as a JSON payload in the webhook, we run the playbook called demo.yml.

demo.yml

- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "Thank you for the answer."

demo.yml is the playbook which runs on the occurrence of the event mentioned in the rulebook and prints a debug message.

---
local:
  hosts:
    localhost

inventory.yml mentions the hosts to run the action against.

Further, there are two files, 42.json and 43.json, to test the code.

{
  "message" : "42"
}
{
  "message" : "43"
}

First we have to install all related dependencies before we can run the rulebook.

$ python -m venv .venv
$ source .venv/bin/activate
$ python -m pip install ansible ansible-rulebook ansible-runner psycopg
$ ansible-galaxy collection install ansible.eda
$ ansible-rulebook --rulebook demo_rule.yml -i inventory.yml --verbose

Go to another terminal, in the same directory path, and run the following command to test the rulebook. After receiving the message, the playbook runs.

curl -X POST -H "Content-Type: application/json" -d @42.json 127.0.0.1:8000/endpoint

Output

2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting sources
2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting rules

...

TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Thank you for the answer."
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
2024-06-07 16:50:08,224 - ansible_rulebook.action.runner - INFO - Ansible runner Queue task cancelled
2024-06-07 16:50:08,225 - ansible_rulebook.action.run_playbook - INFO - Ansible runner rc: 0, status: successful

Now if we send the other json file, 43.json, we see that the playbook does not run even though the HTTP status code is 200.

curl -X POST -H "Content-Type: application/json" -d @43.json 127.0.0.1:8000/endpoint

Output :

2024-06-07 18:20:37,633 - aiohttp.access - INFO - 127.0.0.1 [07/Jun/2024:17:20:37 +0100] "POST /endpoint HTTP/1.1" 200 159 "-" "curl/8.2.1"


You can try this yourself by following this git repository.

by Anwesha Das at June 07, 2024 06:02 PM

A Tragic Collision: Lessons from the Pune Porsche Accident

I’m writing a blog after a very long time, as I kept procrastinating, but today I decided to write about something important and yes, it is a hot topic in the country right now. In Pune, a 17-year-old boy was driving a Porsche while under the influence of alcohol. As I read in the news, he was speeding, and while speeding, his car hit a two-wheeler vehicle, resulting in the death of two young people who were techies.
June 03, 2024 11:39 AM

Test container image with eercheck

Execution Environments serve us the benefits of containerization by solving issues such as software dependencies and portability. Ansible Execution Environments are Ansible control nodes packaged as container images. There are two kinds of Ansible execution environments:

  • Base, which includes the following:

    • fedora base image
    • ansible core
    • ansible collections: ansible.posix, ansible.utils, ansible.windows

  • Minimal, which includes the following:

    • fedora base image
    • ansible core

I have been the release manager for the Ansible Execution Environments. After building the images, I perform certain test steps to check whether the versions of the different components of the newly built images are correct or not. So I wrote eercheck to ease those test steps.

What is eercheck?

eercheck is a command line tool to test the Ansible Community Execution Environment before release. It uses podman-py to connect and work with the podman container image, and Python unittest for testing the containers. The project is licensed under GPL-3.0-or-later.
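The shape of such a test is roughly this; a minimal sketch assuming the rootless podman socket and an illustrative expected version, not eercheck's actual code:

# sketch: assert on a component version inside the image (illustrative)
import unittest

import podman


class TestEEVersions(unittest.TestCase):
    def test_ansible_core_version(self):
        # adjust the UID in the socket path to your user
        uri = "unix:///run/user/1000/podman/podman.sock"
        with podman.PodmanClient(base_url=uri) as client:
            container = client.containers.create(
                "image_id", command=["ansible", "--version"]
            )
            container.start()
            container.wait()
            output = b"".join(container.logs(stdout=True)).decode()
            container.remove()
        # hypothetical expected version; the real tool reads it from vars.json
        self.assertIn("core 2.16", output)


if __name__ == "__main__":
    unittest.main()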

How to use eercheck?

Activate the virtual environment in the working directory.

python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt

Activate the podman socket.

systemctl start podman.socket --user

Update vars.json with the correct version numbers. Pick the correct versions of the Ansible Collections from the .deps file of the corresponding Ansible community package release. For example, for 9.4.0 the Collection versions can be found here. You can find the appropriate version of the Ansible Community Package here. The check needs to be carried out each time before the release of the Ansible Community Execution Environment.
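A hypothetical shape for vars.json (the real file's keys come from the .deps file mentioned above and may differ):

{
  "ansible_core": "2.16.5",
  "collections": {
    "ansible.posix": "1.5.4",
    "ansible.utils": "4.1.0",
    "ansible.windows": "2.3.0"
  }
}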

Execute the program by giving the correct container image id.

./containertest.py image_id

Happy automating.

by Anwesha Das at April 08, 2024 02:25 PM

Making my first OnionShare release

One of the biggest bottlenecks in maintaining the OnionShare desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that a release works on most of the different desktop operating systems. To know more about the pain that goes into making an OnionShare release, read the blogs[1][2][3] that Micah Lee wrote on this topic.

However, one other big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases, and even though the other maintainers are aware of the process, we have never actually made a release. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.

PS: Since Micah has written pretty detailed blogs with code snippets, I am not going to include many code snippets (unless I made significant changes), to not lengthen this already long post further. I am going to keep this blog more like a narrative of my experience.

Getting the hardwares ready

Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine to do the OnionShare release work, especially the signing part of things. Micah has already automated a lot of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon versions of OnionShare manually and then merge them with the Intel version to create a universal2 app bundle.

Also, in general, it's a good practice to have and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for the same. This would help me build the Apple Silicon version as well as sign the packages for the other operating systems.

Also, I received the HARICA signing keys from Glenn Sorrentino that are needed for signing the Windows releases.

Fixing the bugs, merging the PRs

After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making 2.6.1. We fixed, reviewed, and merged most of those bug fixes. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.

Translations

Localization is an important part of OnionShare since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, emmapeel2, who always helps us with weblate wizardry, made certain changes in the setup, which I also wanted to include in this release.

After creating the release PR, I also need to check which languages are greater than 90% translated, and make a push to hopefully making some more languages pass that threshold, and finally make the OnionShare release with only the languages that cross that threshold.

Making the Release PR

And, then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.

Updating dependencies in Snapcraft

Updating the poetry dependencies went pretty smoothly.

There was not much to update in the pluggable transport scripts either.

But then I started updating and packaging for Snapcraft and Flatpak. Updating tor versions to the latest went pretty smoothly. In snapcraft, the python dependencies needed to be compared manually with the pyproject.toml. I definitely feel like we should automate this process in future, but for now, it wasn't too bad.

But trying to build snap with snapcraft locally just was not working for me in my system. I kept getting lxd errors that I was not fully sure what to do about. I decided to move ahead with flatpak packaging and wait to discuss the snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.

Updating dependencies in Flatpak

Even though I read about the hardship that Micah had to go through with updating pluggable transports and python dependencies in flatpak packaging, I didn't learn my lesson. I decided, let's give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other failed.

Then, I moved on to updating the python dependencies for flatpak. The generator code that Micah wrote for desktops worked perfectly, but the cli gave me pain. The format in which the dependencies were getting generated and the existing formats were not matching. And I didn't want to be too brave and change the format, since flatpak isn't my area of expertise. But, python kind of is. So I decided to check if I can update the flatpak-poetry-generator.py files to work. And I managed to fix that!

That helped me update the dependencies in flatpak.

MacOS and Windows Signing fun!

Creating Apple Silicon app bundle

As mentioned before, we still need to create an Apple Silicon bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, I needed to install the poetry dependencies, tor dependencies, and the pluggable transport dependencies.

And I hit an issue again: our get-tor.py script was not working.

The script failed to verify the Tor Browser version that we were downloading. This has happened before, and I suspected that the Tor PGP key must have expired. I tried verifying manually, and it seems that was the case: the subkey used for signing had expired. So I downloaded the new Tor Browser Developers signing key, created a PR, and it seems I could download tor now.

Once that was done, I just needed to run:

/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build

And amazingly, it built successfully in the very first try! That was easy! Now I just need to merge the Intel app bundle and the Silicon app bundle and everything should work (Spoiler alert: It doesn't!).

Once the app bundle was created, it was time to sign and notarize. However, the process was a little difficult for me to do since Micah had previously used an individual account. So I passed on the universal2 bundle to him and moved on to signing work in Windows.

Signing the Windows package

I had to boot into my Windows 11 VM to finish the signing and making the windows release. Since this was the first time I was doing the release, I had to first get my VM ready by installing all the dependencies needed for signing and packaging. I am not super familiar with Windows development environment so had to figure out adding PATH and other such things to make all the dependencies work. The next thing to do was setting up the HARICA smart card.

Setting up the HARICA smart card

Thankfully, Micah had already done this before so he was able to help me out a bit. I had to log into the control panel, download and import certificates to my smart card and change the token password and administrator password for my smart card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.

Since Micah had already made some changes to fix the code signing and packaging stuff, it went pretty smoothly for me and I didn't face many obstructions. Science & Design, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name of Science and Design Inc.

Meanwhile, Micah had got back to me saying that the universal2 bundle didn't work.

So, the Apple Silicon bundle didn't work

One of the mistakes that I made was that I didn't test my Apple Silicon build. I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working. It kept giving a segmentation fault. Time to get back to debugging.

Downgrading cx-freeze to 6.15.9

The first thought that came to my mind was: Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try a build (instead of bdist_mac) with the cx-freeze version from that time (which was 6.15.9) and check if the binary created works. And thankfully, that did work. I tried with 6.15.10 and it didn't. So I decided to stick to 6.15.9.

So let's now try running bdist_mac, create a .app bundle, and hopefully everything will work perfectly! But nope! The command failed with:

OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory

So now I had a decision to make, should I try to monkey-patch this and just figure out how to fix this or try to make the latest cx-freeze work. I decided to give the latest cx-freeze (version 6.15.15) another try.

Trying zip_include_packages

So, one thing I noticed we were doing differently from what the cx-freeze documentation and examples for PySide6 mentioned was that we put our dependencies in packages, instead of zip_include_packages, in the setup options.

    "build_exe": {
        "packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I thought, let's try moving all of the dependencies into zip_include_packages from packages. Basically, zip_include_packages includes the dependencies in the zip file, whereas packages places them in the file system and not in the zip file. My guess was that the Apple Silicon configuration of how a .app bundle should be structured had changed. So the new options looked something like this:

    "build_exe": {
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I created a build using that, ran the binary, and it gave an error. But I was happy, because it wasn't a segmentation fault. The error was mainly because it was not able to import some functions from onionshare_cli. So as a next step, I decided to move everything apart from onionshare and onionshare_cli to zip_include_packages. It looked something like this:

    "build_exe": {
        "packages": [
            "onionshare",
            "onionshare_cli",
        ],
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

This almost worked. Problem was, PySide 6.4 had changed how they deal with ENUMs and we were still using deprecated code. Now, fixing the deprecations would take a lot of time, so I decided to create an issue for the same and decided to deal with it after the release.

At this point, I was pretty frustrated, so I decided to do what I didn't want to do: just have both packages and zip_include_packages. So I did that, built the binary, and it worked. I decided to make the .app bundle. It worked perfectly as well! Great!

I was a little worried that adding the dependencies in both packages and zip_include_packages might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I don't need to replace the lib directory inside the .app bundle anymore. I ran the cleanup code, hit some FileNotFoundErrors, tried to find if the files were now in a different location, couldn't find them, and decided to put the cleanup steps in a try-except block.

After that, I merged the Silicon bundle with the Intel bundle to create the universal2 bundle again and sent it to Micah for signing, and it seems everything worked!

Creating PGP signature for all the builds

Now that we had all the build files ready, I tried installing and running them all, and it seems everything is working fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created the signatures. So the options for us now were:

  • create an OnionShare GPG key that everyone uses
  • sign with my GPG key and update the documentation to reflect that

The issue with creating a new OnionShare GPG key was distribution: the maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.
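
Generating a detached signature per build file is only a couple of commands; here is a minimal sketch (the artifact names are placeholders, and the actual release process may differ):

    import subprocess

    # Hypothetical release artifacts; real file names differ per release.
    artifacts = [
        "onionshare-2.6.dmg",
        "onionshare-2.6.msi",
    ]

    for artifact in artifacts:
        # Writes an ASCII-armored detached signature next to the file,
        # e.g. onionshare-2.6.dmg.asc, using the default GPG key.
        subprocess.run(["gpg", "--armor", "--detach-sign", artifact], check=True)

A download can then be verified with gpg --verify onionshare-2.6.dmg.asc onionshare-2.6.dmg after importing the signing key.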

Concluding the release

Once the signatures were done, the next steps were mostly straightforward:

  • Create a GitHub release
  • Publish onionshare-cli on PyPI
  • Push the build and signatures to the onionshare.org servers and update the website and docs
  • Create PRs for Flathub and the Homebrew cask
  • Promote the Snapcraft edge channel to stable

The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.

February 29, 2024 12:41 PM

[2024] Hope - Dailies

This is a companion to Hope

This maintains a daily journal of the effort and is meant to be read from bottom to top.

To explain the jargon:

  • cw: current weight
  • gw: goal weight

Ok, let's start.

February 12, 2024

- cw: 82.6 kgs

Set on routine, I went to work out and worked on functional movements with Yugal. I was able to do 5 pushups and 20 knee pushups, slowly building up the energy. Sleep was also proper; I hit an 80+ mark and was well rested. Onto the second week of tracking.

Food was still on track, though I need to eat on time. I also need to get a good diet plan and track macros.

Habit Tracking

- Food Habits: 2.5/5
- Water (320ml cup): 3.5L
- Exercise: 3/5
- Sleep Habit: 3.5/5

February 11, 2024

- cw: 83.0 kgs

February 10, 2024

- cw: 83.0 kgs

February 09, 2024

- cw: 83.5 kgs

February 08, 2024

- cw: 83.6 kgs

February 07, 2024

- cw: 83.7 kgs

A good day. I went to the gym in the morning, followed by a good meal. I still need to work on my sleep, but overall I would rate the day good. I also seem to have reached the weight from where the challenge really begins; good that I set the initial goal weight at 81 kgs.

Here is the workout log:

Glute Bridges: 1 min x 2
Plank Hold: 1 min x 2
Tricep Back dips: 15 x 3
Bicep curls: 5kg x 15 x 3
Walking Lunges: 5kg x 10 x 3
Kettlebell Squats: 10kg x 10 x 4
Skips: a couple, learning the form from Jackson
Dumbbell Bench Press: 5kg x 10 x 3 (again, learning the form)

In terms of food, I had a late breakfast after the workout, followed by a late lunch, which did include salad, but I made sure I had an early dinner.

Habit Tracking

- Food Habits: 3.5/5
- Water (320ml cup): 3L
- Exercise: 3/5
- Sleep Habit: 2/5

February 06, 2024

- cw: 83.9 kgs

An okay day of focus, driven more by pure motivation than will. I drank a good amount of water, but I need to drink more. In the afternoon I went for a workout session, which was moderate with scope for improvement. Food habits were okay, as I'm still trying to get into the regime.

Habit Tracking

- Food Habits: 3/5
- Water (320ml cup): 8
- Exercise: 2/5
- Sleep Habit: 3/5

February 05, 2024

- cw: 84.8 kgs

Comparatively, it’s been a good day. It’s been running on pure motivation at the moment; I need to turn that into consistency. I did not get to walk, but I’ve just started to track everything. I went to meet friends in the evening, which rocked the schedule a bit.

Habit Tracking

- Food Habits: 3/5
- Water (320ml cup): 6
- Exercise: None
- Sleep Habit: 3/5

February 04, 2024

- cw: 85.9 kgs

Hello 2024!

2023 surely hasn’t been a good year in terms of staying healthy. I’ve been eating a lot, and that shows. I’ve gained 15 kgs in just a span of a year, and I don’t feel well. I have a trek towards the end of the year, and I need to start working out and feel stronger. Be light and swift!

February 05, 2024 12:00 AM

2024 - Hope

A thread to lose weight and see a healthier version of me, v2024.6? This is inspired by closely following Priyanka and Jason.

16 weeks until June; let’s see how I fare. I started a similar challenge in 2021, but the motivations then and now are quite different.

To explain the jargon:

  • hw: highest weight
  • sw: starting weight
  • cw: current weight
  • gw: goal weight

The log will be updated weekly with the latest entry first. In case you want to start from the beginning, jump here

For daily updates, check the daily fitness log

Ok, let's start.

- hw: 85.9 kgs (05/02/2024)
- sw: 85.9 kgs (05/02/2024)

February 12, 2024

- cw: 83.9 kgs (12/02/2024)

- gw0: 81 kgs
- gw1: 78 kgs
- gw2: 75 kgs
- gw3: 73 kgs
- gw4: 70 kgs

A good first week of tracking: I tried to eat healthy, work out, and move. I failed a bunch of times, but I kept moving ahead and looking to the next day. In the first few days the drop was quite huge, maybe water weight from the past days, but then it stabilized at 83.7 kgs, so the drop after that is what I actually lost. The first goal weight is still a bit far; hopefully I'll reach it in the next 2-3 weeks.

February 05, 2024

- cw: 85.9 kgs (05/02/2024)

- gw0: 81 kgs
- gw1: 78 kgs
- gw2: 75 kgs
- gw3: 73 kgs
- gw4: 70 kgs

Hello 2024!

2023 surely hasn’t been a good year in terms of staying healthy. I’ve been eating a lot, and that shows. I’ve gained 15 kgs in just a span of a year, and I don’t feel well. I have a trek towards the end of the year, and I need to start working out and feel stronger. Be light and swift!

February 05, 2024 12:00 AM

New Blog

New Blog

This is the beginning of my new blog! While https://blog.araj.me was previously running on Ghost as well, this is a new install, primarily because I couldn't easily get the data back from my previous Ghost install. It still lives in a MySQL instance, so old posts might appear on this instance too if I feel like it at some point.

What am I going to write about? I've been working a lot on my homelab setup, so that is probably going to be the starting point. I have also been trying out OpenWRT for my router (running on an EdgeRouter X; who could've thought it can run with 95% space available and over 65% free memory) and struggling to re-configure VLANs to segregate my homelab, "regular internet" for my wife and guests, and IoT stuff. Setting up VLANs on OpenWRT was not fun; I took down the internet a couple of times, which wasn't appreciated at home. So I ended up flashing another old TP-Link router I had to learn OpenWRT, so I can try out settings there before applying them to the main router.

My homelab currently runs on an Intel NUC 10 i7 (6C12T, 16G RAM), which has been plenty for my current use cases. I've over-provisioned it with Proxmox VE as the hypervisor of choice. I am using an actual hypervisor-based setup for the first time, and there is no going back now! I tried out XCP-ng as well, but with XOA I couldn't figure out how to do some stuff, so that setup is currently turned off. Maybe I'll dust it off again at some point. I do have 2 more nodes on standby to run more things, but that'll probably happen once I shift to my new house (hopefully soon!).

by Abhilash Raj at January 10, 2024 05:44 PM

Safeguarding Our Digital Lives: As Prevention is Better than the Cure

Today, I stumbled upon some deeply concerning news regarding the unauthorized leak of private pictures belonging to a 16-year-old girl from her online account. This incident serves as a stark reminder of the risks we face in the digital world. We must exercise caution and thoughtfulness when sharing anything online; once something is uploaded, it can be extremely challenging, almost impossible, to completely remove it. Almost all of us know the trouble we have to go through to get our own pictures removed from fake social media profiles, and their customer support is nearly non-existent.
July 10, 2023 12:00 AM

Upgrading Kubernetes Cluster

June 08, 2023


Disclaimer:

Just trying to document the process, strictly for myself.

This documentation is just for educational purposes.

This process should not be followed on any production cluster!

Aim

To upgrade a Kubernetes cluster with nodes running Kubernetes version v1.26.4 to v1.27.2. I'm using a Kubernetes cluster created using Kind, for example's sake.

STEP 1 — Create a kind Kubernetes cluster

Use the following kind-config.yaml file:

# two node (one worker) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e
- role: worker
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e

Please note:

  • The above config file will create a Kind Kubernetes cluster with 2 nodes:
    • Control Plane Node (name: kind-control-plane) Kubernetes Version: v1.26.4
    • Worker Node (name: kind-worker) Kubernetes Version: v1.26.4

Run the following command to create the cluster:

$ kind create cluster --config kind-config.yaml 
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.26.4) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Verify that the cluster came up successfully:

$ kubectl get nodes -o wide

NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   2m43s   v1.26.4   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          2m25s   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Note that the version of both nodes is currently v1.26.4.


STEP 2 — Upgrade the control plane node

Exec inside the docker container corresponding to the control plane node (kind-control-plane):

$ docker exec -it kind-control-plane bash

root@kind-control-plane:/# 

Install the utility packages:

root@kind-control-plane:/# apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-control-plane:/# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-control-plane:/# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-control-plane:/# apt-get update

Check which version to upgrade to (in our case, we’re checking if v1.27.2 is available)

root@kind-control-plane:/# apt-cache madison kubeadm

  kubeadm |  1.27.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.27.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.27.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.26.5-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.26.4-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  ...

Upgrade Kubeadm to the required version:

root@kind-control-plane:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm

...
Setting up kubeadm (1.27.2-00) ...

Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.
...

root@kind-control-plane:/# kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:18:49Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}

Check and verify the kubeadm upgrade plan:

root@kind-control-plane:/# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade/versions] Target version: v1.27.2
[upgrade/versions] Latest version in the v1.26 series: v1.26.5
W0608 12:57:04.800282    5535 compute.go:307] [upgrade/versions] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.26.5
            1 x v1.27.2   v1.26.5

Upgrade to the latest version in the v1.26 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.26.5
kube-controller-manager   v1.26.4   v1.26.5
kube-scheduler            v1.26.4   v1.26.5
kube-proxy                v1.26.4   v1.26.5
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.26.5

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.27.2
            1 x v1.27.2   v1.27.2

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.27.2
kube-controller-manager   v1.26.4   v1.27.2
kube-scheduler            v1.26.4   v1.27.2
kube-proxy                v1.26.4   v1.27.2
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.27.2

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

We will upgrade to the latest stable version (v1.27.2):

root@kind-control-plane:/# kubeadm upgrade apply v1.27.2

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.27.2"
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0608 12:59:23.499649    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:07.900906    5571 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.27.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
W0608 13:00:48.303106    5571 staticpods.go:305] [upgrade/etcd] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:48.305410    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests56128700"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2613181160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.27.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

I’m skipping upgrading the CNI (I don’t have any additional CNI provider plugin, other than the Kind cluster default, kindnet).

But if you need to check how kindnet is working, run the following inside the control plane node:

root@kind-control-plane:/# crictl ps

...
5715f2f6e401c       b0b1fa0f58c6e       8 minutes ago       Running             kindnet-cni               2                   3d78434184edf       kindnet-blltq
...
root@kind-control-plane:/# crictl logs 5715f2f6e401c   
I0608 13:02:38.079089       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.080550       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.080592       1 main.go:93] apiserver not reachable, attempt 0 ... retrying
I0608 13:02:38.080600       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.081047       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.081072       1 main.go:93] apiserver not reachable, attempt 1 ... retrying
I0608 13:02:39.081260       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:39.082375       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:39.082405       1 main.go:93] apiserver not reachable, attempt 2 ... retrying
I0608 13:02:41.082727       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:41.083924       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:41.083963       1 main.go:93] apiserver not reachable, attempt 3 ... retrying
I0608 13:02:44.085510       1 main.go:316] probe TCP address kind-control-plane:6443
I0608 13:02:44.088241       1 main.go:102] connected to apiserver: https://kind-control-plane:6443
I0608 13:02:44.088270       1 main.go:107] hostIP = 172.18.0.3
podIP = 172.18.0.3
I0608 13:02:44.088459       1 main.go:116] setting mtu 1500 for CNI 
I0608 13:02:44.088536       1 main.go:146] kindnetd IP family: "ipv4"
I0608 13:02:44.088559       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0608 13:02:44.278193       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0608 13:02:44.278210       1 main.go:227] handling current node
I0608 13:02:44.280741       1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0608 13:02:44.280753       1 main.go:250] Node kind-worker has CIDR [10.244.1.0/24] 
I0608 13:02:54.293198       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]

Now, before we upgrade the kubelet & kubectl (and restart the services), open a new terminal (outside the docker exec) and mark the node unschedulable (cordon), then evict the workloads (drain):

# Outside the docker exec terminal
$ kubectl drain kind-control-plane --ignore-daemonsets

node/kind-control-plane cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-blltq, kube-system/kube-proxy-rfbd5
evicting pod local-path-storage/local-path-provisioner-6bd6454576-xlvmc
pod/local-path-provisioner-6bd6454576-xlvmc evicted
node/kind-control-plane drained

$ kubectl get nodes -o wide

NAME                 STATUS                     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready,SchedulingDisabled   control-plane   47m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready                      <none>          47m   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Now, come back to the former terminal with the docker exec (into the control-plane node) and upgrade the kubelet and kubectl:

root@kind-control-plane:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease             
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease                       
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease             
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.

And now restart the kubelet:

root@kind-control-plane:/# systemctl daemon-reload
root@kind-control-plane:/# systemctl restart kubelet

And now go back to the other terminal outside the docker exec, and uncordon the node:

$ kubectl uncordon kind-control-plane

node/kind-control-plane uncordoned

And that’s everything for the control plane upgrade! Just check that everything is running properly:

$ kubectl get nodes -o wide

NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   52m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          51m   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

And don’t forget to exit from the docker exec terminal (kind-control-plane):

root@kind-control-plane:/# exit
exit

STEP 3 — Upgrade the worker node

Exec inside the docker container corresponding to the worker node (kind-worker):

$ docker exec -it kind-worker bash
root@kind-worker:/# 

Install the utility packages:

root@kind-worker:/#  apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-worker:/#  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-worker:/#  cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-worker:/#  apt-get update

Upgrade Kubeadm to the required version:

root@kind-worker:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm

...
Setting up kubeadm (1.27.2-00) ...

Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.

Run kubeadm upgrade (for worker nodes, this upgrades the local kubelet configuration):

root@kind-worker:/# kubeadm upgrade node

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2909228160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Now, before we upgrade the kubelet & kubectl (and restart the services), open a new terminal (outside the docker exec of the kind-worker container) and mark the node unschedulable (cordon), then evict the workloads (drain):

# Outside the docker exec terminal
$ kubectl drain kind-worker --ignore-daemonsets

node/kind-worker cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qpx8l, kube-system/kube-proxy-5xf5d
evicting pod local-path-storage/local-path-provisioner-6bd6454576-km824
evicting pod kube-system/coredns-5d78c9869d-mvgjq
evicting pod kube-system/coredns-5d78c9869d-zrmm4
pod/coredns-5d78c9869d-mvgjq evicted
pod/coredns-5d78c9869d-zrmm4 evicted
pod/local-path-provisioner-6bd6454576-km824 evicted
node/kind-worker drained

$ kubectl get nodes -o wide
NAME                 STATUS                     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready                      control-plane   62m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready,SchedulingDisabled   <none>          61m   v1.27.2   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Now, come back to the former terminal with the docker exec (into the kind-worker container) and upgrade the kubelet and kubectl:

root@kind-worker:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease            
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease                      
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease            
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.
kubectl set on hold.

And now restart the kubelet:

root@kind-worker:/# systemctl daemon-reload
root@kind-worker:/# systemctl restart kubelet

And now go back to the other terminal outside the docker exec, and uncordon the node:

$ kubectl uncordon kind-worker

node/kind-worker uncordoned

And that’s everything for the worker node upgrade! Just check that everything is running properly:

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   67m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          66m   v1.27.2   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

And don’t forget to exit from the docker exec terminal (kind-worker):

root@kind-worker:/# exit
exit

With that, both our nodes are now successfully upgraded from Kubernetes v1.26.4 to v1.27.2.



June 08, 2023 12:00 AM

CSS: Combinators

In CSS, combinators are used to select content by combining selectors in specific relationships. There are different types of relationships that can be used to combine selectors.

Descendant combinator

The descendant combinator is represented by a space “ ” and is typically used between two selectors. It selects elements matching the second selector if they have an ancestor (parent, grandparent, and so on) matching the first selector. These selectors are called descendant selectors.

.cover p {
    color: red;
}
<div class="cover"><p>Text in .cover</p></div>
<p>Text not in .cover</p>

In this example, the text “Text in .cover” will be displayed in red.

Child combinators

The child combinator is represented by “>” and is used between two selectors. An element is only selected if it matches the second selector and is a direct child of an element matching the first selector, meaning no other element can sit between the two in the tree.

ul > li {
    border-top: 5px solid red;
} 
<ul>
    <li>Unordered item</li>
    <li>Unordered item
        <ol>
            <li>Item 1</li>
            <li>Item 2</li>
        </ol>
    </li>
</ul>

In this example, the <li> elements with the text “Unordered item” will have a red top border, while the nested ordered list items will not.

Adjacent sibling combinator

The adjacent sibling combinator is represented by “+” and is placed between two CSS selectors. An element is selected only if it matches the second selector and immediately follows a sibling matching the first selector, i.e. it is the adjacent sibling.

h1 + span {
    font-weight: bold;
    background-color: #333;
    color: #fff;
    padding: .5em;
}
<div>
    <h1>A heading</h1>
    <span>Veggies es bonus vobis, proinde vos postulo essum magis kohlrabi welsh onion daikon amaranth tatsoi tomatillo
            melon azuki bean garlic.</span>

    <span>Gumbo beet greens corn soko endive gumbo gourd. Parsley shallot courgette tatsoi pea sprouts fava bean collard
            greens dandelion okra wakame tomato. Dandelion cucumber earthnut pea peanut soko zucchini.</span>
</div>

In this example, only the first <span> element will have the given CSS properties.

General sibling combinator

The general sibling combinator is represented by “~”. We use it when we want to select all matching siblings that follow an element, not only the adjacent one.

h1 ~ h2 {
    font-weight: bold;
    background-color: #333;
    color: #fff;
    padding: .5em;
}
<article>
    <h1>A heading</h1>
    <h2>I am a paragraph.</h2>
    <div>I am a div</div>
    <h2>I am another paragraph.</h2>
</article>

In this example, every <h2> element that follows the <h1> will have the given CSS properties.

CSS combinators provide powerful ways to select and style content based on their relationships in the HTML structure. By understanding combinators, we can create clean, maintainable, and responsive web designs.

Cheers!

References: MDN Web Docs

#CSS #Combinators #WebDevelopment #FrontendDev

May 28, 2023 02:42 PM

Understanding massive ZeroDay impacting Dogecoin and 280+ networks including Litecoin and Zcash

Halborn discovered a massive #ZeroDay vulnerability code-named Rab13s impacting Dogecoin and 280+ networks, including Litecoin and Zcash, putting over $25 billion of digital assets at risk. To understand the Rab13s zero-day vulnerability, we need to go through some basic concepts, so I would like to explain blockchain and its key characteristics. A blockchain is a data structure used to represent a cryptocurrency. It stores data in a way that allows multiple parties to access it reliably without having to trust one another.
May 23, 2023 12:00 AM

What is stopping us from using free software?

I had a funny day yesterday

I'll start with the evening, which I spent as a tutor in the RoboLab: a workshop for kids aged 10-18 to build their own robot with some 3d printed parts, an ESP, and all the electrical equipment you need to make some wheels move. It's a great project, and I have much respect for the people who initiated it and still maintain it in their free time with the children.

The space we can use for the project is called Digitallabor (digital lab) and offers anything you would want from a well-equipped maker space, including a shelf full of laptops to use while you're working there.

I should not be surprised anymore, but I can't help it

Of course, all laptops run Windows. I picked one, booted it up, and saw the fully bloated and ad-loaded standard installation of Windows 10. The last time a search for updates had been performed: early 2019. No customized privacy settings, nothing. Just the standard installation in all its ugliness.

I asked the people running the space why. Why? This would be the perfect place to introduce the children to free software and even shed some light upon the difference between Free and Open Source Software and proprietary, user-despising spyware (of course, I asked in a somewhat more diplomatic manner).

The answers: "The children are used to it.", "It's easier to maintain."

Yes. So the last search for updates was in 2019. That's well maintained.

Regarding "the children are used to it": I can confirm that children don't give a shit. If it runs Minetest, then it's fine. That is, if they have access to a PC or laptop at home at all, because in my experience most of the kids nowadays have exactly two digital media skills anyway: tapping and swiping. So this would be the perfect place to introduce them to free alternatives!

The morning was different

We're a small company with only ~12 employees, most of whom are rather non-technical. So there is no IT department. Or in other words: I am the IT department. And our IT department finds it no longer responsible to run Windows on business PCs (at least in the world outside the US). So yesterday I prepared a new PC with Fedora 38, brought it to my colleagues, and asked: "Who dares to try this Linux desktop?"

Guess who stepped forward instantly and said, "I can do that"? My ~60 year old colleague, who was a medical technical assistant when I wasn't even born and a life-long Windows user. We did the initial configuration, synced her mails and calendars, set up printers and network drives, and went through the most important peculiarities of the GNOME3 desktop. It took about 90 minutes, and then she said, "I guess I'm fine from here. I'll play around with this a bit to get used to the new apps". I promised her first-level support, but she worked without any issues the whole day.

I'm really proud of her

So many people keep telling me it would be too hard, too much reorientation, to switch operating systems, but moments like this show me that the problem may lie somewhere else. People are afraid of change. People want to spare themselves the effort. But I think that daring to make a change, instead of doing nothing despite better knowledge, will be rewarded. The next desktop PC is already prepared, so next week I will ask the question again :)

by Robin Schubert at April 28, 2023 12:00 AM

Google Open Source Peer Bonus Award 2023

I am honored to be a recipient of the Google Open Source Peer Bonus 2023. Thank you, Rick Viscomi, for nominating me for my work with the Web Almanac 2022 project. I was the author of the Security and Accessibility chapters of the Web Almanac 2022.

Google Open Source Peer Bonus 2023 Letter. Dated April 19, 2023. Dear Saptak Sengupta, On behalf of Google Open Source, I would like to thank you for your contribution to 2022 Web Almanac. We are honored to present you with a Google Open Source Peer Bonus. Inside the company, Googlers can give a similar bonus to each other for going above and beyond, so this is just a small way of saying thank you for your hard work and contributions to open source. We hope you enjoy this gift from all of us at Google and Rick Viscomi who nominated you. Thank you again for supporting open source! We look forward to your continued contributions. Best regards, Chris DiBona, Director of Google Open Source

Over the last year, I have started to spend more time contributing to, maintaining, and creating Open Source projects, and have reduced the amount of contract work I usually do. So this letter of appreciation feels great and gives me an additional boost to continue doing Open Source projects.

Some of the other Open Source projects that I have been contributing to and trying to spend more time on are:

In case someone is interested in supporting me in continuing to do open source projects focused on security, privacy, and accessibility, I have also created a GitHub Sponsors account.

April 20, 2023 07:32 AM

Converting HTML Tables to CSV

Today, I decided to analyze my bank account statement by downloading everything from the day I opened my bank account. To my surprise, it was presented as a web page. Initially, my inner developer urged me to write code to scrape that data. However, feeling a bit lazy, I postponed doing so.

Later in the evening, I searched the web for an alternate way to extract the data and discovered that HTML tables can be converted to CSV files: all I had to do was save the table markup in CSV format. I opened the Chrome browser's inspect feature, copied the table, saved it with the CSV extension, and then opened the file with LibreOffice. Voila! I had a spreadsheet with all my transactions.
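
For the record, the "inner developer" route is also only a few lines. Here is a minimal sketch using pandas (an assumption on my part; it needs pandas plus an HTML parser like lxml installed, and "statement.html" is a placeholder for the saved statement page):

    import pandas as pd

    # read_html() parses every <table> on the page and returns a list of
    # DataFrames, one per table found.
    tables = pd.read_html("statement.html")

    # Write the first table to CSV; index=False skips pandas' row numbers.
    tables[0].to_csv("statement.csv", index=False)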

Cheers!

#TIL #CSV #HTML Table

April 15, 2023 05:41 PM

Mastering Async Communication in a Remote World

This is one of my favorite posts/documents I have written. I wrote it during the pandemic (2020-21), when InfraCloud, the organization I work with, decided to go fully remote. It was published at infracloud.io on 11th April 2023: Mastering Async Communication in a Remote World. As a remote-first organization, we encourage everyone to follow asynchronous communication while working with our peers and customers at InfraCloud. This article about writing better messages is directly from our internal handbook.
by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at April 10, 2023 06:30 PM
