Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

Fosdem 101

'Fosdem ain't like any other conference.' "Ahh, such an overrated statement," I thought to myself. I have been attending and organizing events for years now. "I attended PyCon US with 3500+ attendees and organized a conference with 1000+ people; most importantly, I come from India; nothing can break me." It was such a grave (with a capital G) mistake on my part. I was completely clueless and lost at the event. So I decided to put down these reference notes before going to Fosdem next time.

The preparation should be threefold: one, before traveling; two, the day before the conference; and three, during the conference.

Before traveling

Flu shot

Fosdem is infamous for its flu. Trust me, I and many of my team members got it, and it's nasty. I didn't get the flu shot and suffered (and how). Get your flu shot early enough for it to be effective during the conference.

Prior appointments

The conference is enormous, with countless activities going on at the same time. Unlike at other conferences, it is easy never to come across someone despite spending two days at the same venue. So, if you want to meet people, make plans well in advance. Fix the appointment (date, time, and exact location).

Fringe Events

There are many events in Belgium in and around the conference days, such as CentOS Connect, the EU Open Source Policy Summit, and Config Management Camp. Look out for such events to visit.

Hotel Booking

There are not many places to stay near the conference venue; the most convenient ones are near the city center. Try to book your hotel in advance, especially in the city center. If you wait, there is a high likelihood you won't be able to get the place. Also, check the distance from the conference venue, and coordinate with your friends if you want to stay at the same hotel.

Prep the day before the conference

Packing

Pack a water bottle and some food. You may not have the time, energy, or chance to stand in the long queues for food, so it is better to be ready with your own stock.

Notes

There will be a lot of interesting discussions and new and old contacts that you will struggle to remember, so you need to take a lot of notes. Pack your notebook and pen, or charge your note-taking device.

Pack light

There will only be a few chances to sit, so pack light if you do not want a sore back after the conference. And charge your laptop.

[Photo: Harish, Carol, and me]

The days of the conference

The conference venue is huge and confusing (at least for a first-timer). Find out where the rooms are beforehand, since there are going to be long queues.

Start early

Start earlier than usual if you want to meet someone at a particular time. You will run into friends on the way and get stopped for conversations before you can get to where you are going.

Supplies

If you carry your cards, run a booth, or have any giveaways, ration them throughout the day so you do not run out in the first go. However many you carry, you will run through them all.

Talks

The rooms fill up very quickly. If you wish to attend any talks, make a list and try to get into the room at the very beginning of the schedule. Otherwise, you have very little chance of getting inside.

[Photo: Deb and me]

Hallway Tracks

Hallway tracks are the most exciting place to be. All the intriguing discussions happen here: technology, community, legal, policy, you name it. You will find your next best project idea. Remember the famous Pac-Man rule: leave space in your circle for someone to join, like the Pac-Man logo. You may find the person you look up to, your mentor, a fellow developer to work with you on your project, or a friend. But one thing is for sure: you will find your community there.

So have a proper breakfast and run to the venue, because "Fosdem ain't like any other conference."

by Anwesha Das at March 08, 2024 05:29 PM

CfgMgmtCamp 2024

Ansible Contributor Summit, now renamed Ansible Colab, is one of Ansible's most significant and undoubtedly crucial events. It takes place on the last day of Config Management Camp in Ghent. Traditionally, the conference runs from the Monday to the Wednesday after Fosdem.

We started from Brussels in the afternoon on the last day of Fosdem. The hardest part for me was bidding goodbye to my daughter, who had accompanied me to Fosdem. She tried to convince me to take her along by saying, "I will be helpful in the booth duty." When that proposition failed, she finally said bye with a note: "Help [Carol], she works a lot."
We reached Ghent in the early afternoon. Thank you, Boris, for the ride and the community help. After a walk around the city during the light festival and my first-ever bubble tea, we reached the conference dinner. And how nice it was to meet [Florian] there.

Days of the Conference

[Photo: CfgMgmtCamp 2024]

I traveled there with a pretty extensive agenda:

  • Finish working on some (long stuck) PRs
  • Plan on some of the technical decisions
  • Discuss some future community plans
  • Hacking on some issues
  • And, of course, a lot of meetings with my fellow community members

Most of us stayed in the same hotel, which made travel easier and gave us exciting conversations on the tram rides. We started for the venue early in the morning. We had a booth next to our friendly neighbor Foreman's (once again, after Fosdem). [Rick], [Greg], [Don], [Carol], and I shared the booth duties. And of course there were [Tim], [Felix], and [David] helping us with the questions from our users and contributors. We had two Ansible tracks this year at Config Management Camp.

The talk lists

This time, again, I could only spend a little time attending the talks due to booth duty (which I enjoy to the fullest) and room moderator duty. Room duty at least lets us listen to the talks from the best seat :). Here is the list of talks that I could not attend but kept on my watch list:

  1. Terraforming with Ansible by Tim Appnel
  2. [Where does your Ansible code come from?] by Fabio Alessandro "Fale" Locati
  3. [Simplifying Cloud Deployments with Ansible for React Next.js on AWS EC2] by Rose Crisp
  4. We Fear Change by Coté
  5. [If Dev and Ops had a baby, it would be called Winglang] by D.Aud
  6. [Automating Hybrid Clouds with Event-Driven Ansible] by Ricardo Carrillo Cruz
  7. Ansible - State of the Community (https://cfp.cfgmgmtcamp.org/2024/talk/3ZD3WX/) by Greg Sutcliffe
  8. [Automating project documentation for the win] by Don Naro
  9. Unstructuring your mind: Ansible vs. JSON by Felix Frank

Ansible Colab Day

[Photo: CfgMgmtCamp 2024]

The last day of Config Management Camp was Ansible Colab Day. We started a little late; it was a day of all things Ansible. The day began with [Sutapa], who joined us online, sharing her journey of changing her career path from being a GIS professional to an open source contributor in the Ansible community. Witnessing her journey, and playing a small part in it, felt really nice. Greg shared the state of the Ansible community with us. Don told us what is going on with the Ansible documentation. Felix, David, and I presented the 'Past, Present, and Future of Ansible Release Management' to the community. After lunch, we sat down for a more informal chatting and hacking session. And with that, we called an end to the conference.

Au revoir

[Photo: CfgMgmtCamp 2024]

After five days of friends, festivals, and feasts, saying goodbye is always hard. I finished my todo list and filled my notepads with ideas and upcoming todos. I met some wonderful people and made some great friends. A special note of gratitude to the organizers for keeping this event fantastic over the years. Until we meet again (I wish it to be soon), thank you for all your contributions, and see you online.

Moment to cherish

At one of the conference dinners, I introduced myself to a fellow attendee as a Red Hatter, and he said, "Ahh, Red Hat, you are the good ones." It made my day. Yes, we are the good ones; we always strive to keep that trust and name going.

by Anwesha Das at March 07, 2024 04:17 PM

Ursula Vernon on Unrealistic Expectations and Eating Bread



If you’re ever feeling guilty about not cooking a fresh home-cooked meal, a reminder that people in cities historically either had cooks or ate at food stalls, going back to Ancient Greece. Ancient Egypt, too, although since everybody ate bread, beer, and onions, less of a thing.

It’s a weird quirk of our obsession with nuclear families that everybody is expected to have time, skill, and equipment to cook daily and that if you’re a woman, particularly, you are a lesser person if you aren’t casually able to cook every day with random fresh ingredients.

Don’t buy into that. People since forever have hired cooks, gone to inns, lived in extended families where it wasn’t always your turn to cook, or ate such simplified diets that it was less of an issue.

You haven’t failed at a normal human task,
you have been sold an unrealistic expectation and told it was a normal human task.
Go get takeout.
Or beer, bread, and onions.
Eat cheese and some dates.
Relax.

Ursula Vernon, via Diane Duane



I stumbled across Ursula from her art.

Art courtesy, Ursula Vernon


And then I realised she is an accomplished children’s author


And she does not stop there; she writes Hugo Award-winning books for adults across genres: horror, fantasy, and romance.1


And talk about making art from pain, she fucked cancer right in the eye!


Read all about her long journey here!
And if it does not inspire you, and make you shed a tear, I’ll eat my hat.


Feedback on this post? Mail me at feedback at this domain

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. I’d say she’s a female Neil Gaiman. But let’s face it. She’s far more talented and kicks ass. Gaiman is the male T Kingfisher! ↩︎

March 05, 2024 07:20 AM

Note to Self, Cal Newport’s Minimal Notes System


My old-style slow notetaking process.
Replaced now with Elipsa Annotations, which then move along with my thoughts into Org Roam Notes.


Cal Newport recently did a deep dive on his podcast into a minimalist note-taking system for various areas of your life.
The video’s on YouTube, if you want to watch. It’s called A Productivity System To Remember Everything You Learn.

It matches what I’ve organically been doing all these years.
And I was really happy to see it structured and rendered so well.
So I wrote and paraphrased and jotted it all down below.
Note: These are my mangling of Cal’s words, not what he actually said. The audio/video above should give you exact specifics.


Why?

Minimal Friction

  • too much friction with other systems
  • friction in work that matters is ok, is good, is perfect. McPhee & Caro! But for most other things, not needed.
    • Cal goes on to verbally list what McPhee does, but my notes are better.
    • In a nutshell, friction slows him (McPhee) down in a way that’s beneficial, that lets him do his best work
  • Notetaking does not need such friction. When information is coming at you, your time to act + energy to think are limited
  • Friction stops you from taking action. That information could then be lost
    • For e.g., a complicated note-taking process when reading a book might stop you from getting into the book, or keep you from reading it altogether.
  • Your system should therefore reduce friction, so that you can capture as much information as possible quickly, efficiently, painlessly

Outsource your brain? No!

  • When it comes to things that matter, stuff should live in your brain. So that stuff percolates into your value systems and mental models and you evolve!
  • Your brain needs to be part of this curation process. Your brain needs to build hooks. To remember the big things. For accuracy and precision, details can always be looked up.

The System

For Books (the Corner Marking Method)

  • If there’s something interesting on the page, mark the corner of the page (dog-ear it with a pen) and then …
    • Mark up the things that interest you! Simple marks in the margins. Put a box around text. Checkmarks next to a line. Curly braces next to a paragraph to remember.
    • Occasionally write a short note, to help you remember the context, or to tell yourself what it reminds you of, or is there any other place where you’d find this useful?
  • Barely slows you down. Does not get in your way. Does not prevent you from reading
  • Then when you go back to your book and look at your dog-eared pages and the stuff you marked on them? You will, in a few minutes, reconstitute all the ideas from that book.
  • Bonus: One “shortcoming”? You need to remember! Oh that book? That was the one that had the interesting ideas about such and such
    • That is not a problem. This is the one bit of friction that is useful. It lets you use the gist of what you learned from the book in the schemas of knowledge that you are constructing and modifying and growing in your head. So that’s not really a bug, rather a feature. In English? It helps me build better Mental Models!
    • You become a better reader

Projects (Professional and Personal)

  • Where do notes relevant to a project go?
  • Store notes relevant to a project in the location where you will one day work the project.
    • I do this on a personal level too. Everything is where my mind says I ought to look for it, not where it “ought to be.” Which is why my keys are next to my Minnie Mouse plushie and my flash drives live in an old soup bowl.
  • So when it comes time to do the work, everything that you’ve gathered is all there for you to use.
  • Duplication is ok. (If there’s too much duplication, which happens rarely, I organically figure something out)
  • Bonus: Whenever you add something new to the pile, you always encounter your old stuff and it keeps refreshing a mental picture of the project in your mind and helps create new mental grooves

Ideas about Life? Your values, your inspiration, you want to do something with and in your life

  • Keep a fancy, awesome, cool, aspirational notebook!
  • A fancy pen you like
  • Basically whatever you used when you started your journey? You wrote notes, you made headlines, you had marginalia in your pages, you did everything by hand. Just use the classiest things you like and can afford, to give it heft and meaning.
  • Your life has few ideas, important as they are. Easy to keep track of them in a notebook
  • You want the form to matter.
  • Digital notebooks? They’re cool too, if you like them.
  • All you need is semi regular review and process
    • A good lazy way to do it is when the notebook fills up
    • Review and copy over the summaries of everything good and lasting and important
    • See what sticks


Feedback on this post? Mail me at feedback at this domain

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


March 02, 2024 11:05 AM

Iron Widow



Art of various scenes of a young Chinese couple. The art is all fantasy, with vibrant colours: the top one shows a heartfelt conversation, the middle one is a lovey-dovey moment, and the third has them standing ready for battle in action poses.

courtesy, Xiran Jay Zhao


What happens when you take all the pulpy goodness from Indian soaps and K-dramas, mash it up with Pacific Rim and Robotech, infuse it with a lot of authentic Chinese history, and aim it at teens (err, young adults)?

You get Iron Widow by Xiran Jay Zhao, that’s what!

I wish someone would take Indian history and create such awesome fictional worlds from it.
This was such a crazy, rollicking ride! I cannot wait for their next.


My highlights, from the book

In hindsight, I was such a fool to have assumed Qieluo would stand by me just because she’s also female.
It was my grandmother who crushed my feet in half.
It was my mother who encouraged me and Big Sister to offer ourselves up as concubines so our brother could afford a future bride.
It was always the village aunties who’d sit around gossiping about which girl hadn’t been married off yet, despite complaining nonstop about their own husbands. And then they’d congratulate new mothers for being “blessed” to have a boy, despite being female themselves.

How do you take the fight out of half the population and render them willing slaves? You tell them they’re meant to do nothing but serve from the minute they’re born. You tell them they’re weak. You tell them they’re prey.
You tell them over and over, until it’s the only truth they’re capable of living.


Feedback on this post? Mail me at feedback at this domain

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


March 01, 2024 03:52 AM

Fosdem, 2024

"What, a technology conference has a legal and policy track! This is the conference for me to be in. " That was me in 2017. I dreamed of attending Fosdem but could not attend one (even though my talk was selected). 2024 marked my debutante year in Fosdm.

Fosdem == Friends

Conferences are always the place to meet and interact with people, make new friends, and catch up with old pals. I always prefer hallway tracks to going to talks, especially in an era when all the talks get recorded. Fosdem is no exception. I was attending a conference after a long gap, which made it all the more special.

My adventure started two days before the conference by meeting saptaks after a year. It was the same as before: chatting about different technologies, communities, projects, and much more till midnight. The next day, I met Spot for lunch. I had met him for the first time in 2019, after he started mentoring me, so we had a lot to catch up on. Dinner was with Greg (whom I was meeting for the first time), Don, and the Red Hat OSPO team. Since I joined Red Hat (rather than before), I have been curious about how our OSPO works; it is one of the primary teams that helps keep the Red Hat culture and integrity alive. I have followed Brian Proffitt's work since I joined the open source software community. At the dinner, I got to converse with him on various topics: open-source communities, the different types of OSPOs, the evolution of Fosdem and communities over the years, and Red Hat's past, present, and future. It was fascinating and enlightening.

Conference Days == booth duty

Day 0 started early; we needed to arrive early to set up the Ansible booth. Our super organizer Carol could not arrive during the booth setup due to a workers' strike in her hometown. Greg and Don had already done the initial setup before I reached, and we tried to manage the booth in her absence. We were sharing the booth with Foreman. Carol arrived before the afternoon and took charge, and everything went smoothly (as ever). How does she make it seem easy every time? People think organizing events and managing booth duties is always fun and easy, but it takes a lot of planning and detailed work. Every time I see her planning and chalking out every single detail with the organization, community, organizing committee, and sponsors, I am in awe. Andrei and Rick also joined us at the booth. It was great to point users to the respective upstream authors whenever there was a question regarding the collections or AWX :).

It was the first time I got to meet David and Felix.
I have learned so much from them regarding the different parts of the project, especially the release process. This was the highlight of my Fosdem.

Booth duty is something I enjoy. It is always wonderful to talk to your users and contributors. It is our chance to say "thank you" to them. However, managing the Ansible booth was strikingly different from managing booths for other projects. The attendees who visited our booth can broadly be grouped into the following three categories:

  • People who do not know about Ansible/ or are new in their Ansible journey
  • Users who use it extensively (in their day job, open source community project)
  • Contributors

Each of these groups, given their varied interests and experience, asked questions which can roughly be divided into the following:

  • What is Ansible? What does it do?
  • How is Ansible similar to projects A, B, or C?
  • How can we contribute?
  • How can we talk to the upstream?
  • I have x questions. Where can I get help?

These questions are not unique, but a different set of people comes here. Mostly, people came by saying, "Thank you for Ansible," "We love your project," "I use Ansible every day for x period, and it makes my life so easy," or "We like the new Ansible Forum, it makes our life so easy." The next most common set was "We need this 'y' feature; it will help our project. I will work on this," "I think our project needs this, I will work on this," and "We need to fix this, I need to work on this." This is where the difference comes in.
A. Our users seem (and have in fact proved to be) happy and satisfied with the project.
B. We have an excellent contributor base that is loyal to the project. They do not only look for issues but also offer and take responsibility for the solution.
We did not get much criticism or many feature requests, which is rare in my experience of booth duty. It makes two things clear about Ansible:
First, people trust our project and find it dependable.
Second, our project solves and takes care of the imminent issues around the part of infrastructure that.....

The fan-girl moments

[Photo: Deb and me]

Conferences allow me to meet the upstream authors of the projects I use daily, people I admire and whose career paths I aspire to follow. And Fosdem gives you plenty of such opportunities: Deborah Bryant, Nithya Ruff, Brian Proffitt, Jan Wildeboer, Rich Bowen, Zaheda Bhorat. The conference becomes more magical when you get the 'you are worth it' feeling from them. Be it one-on-one conversations with Nithya about open-source communities and business policies, or Deb telling people, 'Hey, you Red Hatters, I wanted to hire her, but you got her before me.' These are the moments for which I get up each time I fail.

Thank you, Fosdem. Thank you, Open Source. Long live the community.

by Anwesha Das at February 29, 2024 07:26 PM

Making my first OnionShare release

One of the biggest bottlenecks in maintaining the OnionShare desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that a release works across most desktop operating systems. To know more about the pain that goes into making an OnionShare release, read the blogs[1][2][3] that Micah Lee wrote on this topic.

However, one other big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases, and even though the other maintainers are aware of the process, we have never actually made a release ourselves. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.

PS: Since Micah has written pretty detailed blogs with code snippets, I am not going to include many code snippets (unless I made significant changes) so as not to lengthen this already long post further. I am going to keep this blog more like a narrative of my experience.

Getting the hardware ready

Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine for the OnionShare release work, especially the signing part of things. Micah has automated a lot of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon version of OnionShare manually and then merge it with the Intel version to create a universal2 app bundle.

Also, in general, it's good practice to keep and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for this. It would help me build the Apple Silicon version as well as sign the packages for the other operating systems.

Also, I received from Glenn Sorrentino the HARICA signing keys that are needed for signing the Windows releases.

Fixing the bugs, merging the PRs

After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making 2.6.1. We fixed, reviewed, and merged most of those. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.

Translations

Localization is an important part of OnionShare since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, emmapeel2, who always helps us with Weblate wizardry, made certain changes to the setup, which I also wanted to include in this release.

After creating the release PR, I also needed to check which languages are more than 90% translated, push to hopefully get some more languages past that threshold, and finally make the OnionShare release with only the languages that cross it.

Making the Release PR

And then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. My big mistake was not learning from the pain in Micah's blog.

Updating dependencies in Snapcraft

Updating the poetry dependencies went pretty smoothly.

There was nothing much to update in the pluggable transport scripts as well.

But then I started updating and packaging for Snapcraft and Flatpak. Updating the tor versions to the latest went pretty smoothly. In Snapcraft, the Python dependencies needed to be compared manually with pyproject.toml. I definitely feel like we should automate this process in the future, but for now, it wasn't too bad.

But trying to build the snap with snapcraft locally was just not working on my system. I kept getting LXD errors that I was not fully sure what to do about. I decided to move ahead with Flatpak packaging and discuss the snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.

Updating dependencies in Flatpak

Even though I had read about the hardship that Micah went through updating the pluggable transports and Python dependencies in the Flatpak packaging, I didn't learn my lesson. I decided to give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other failed.

Then I moved on to updating the Python dependencies for Flatpak. The generator code that Micah wrote for the desktop worked perfectly, but the CLI gave me pain. The format in which the dependencies were being generated did not match the existing format. I didn't want to be too brave and change the format, since Flatpak isn't my area of expertise. But Python kind of is. So I decided to check if I could update the flatpak-poetry-generator.py file to work, and I managed to fix it!

That helped me update the dependencies in flatpak.

MacOS and Windows Signing fun!

Creating Apple Silicon app bundle

As mentioned before, we still need to create an Apple Silicon bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, I needed to install the poetry dependencies, the tor dependencies, and the pluggable transport dependencies.

And I hit an issue again: our get-tor.py script was not working.

The script failed to verify the Tor Browser version that we were downloading. This had happened before, and I suspected that the Tor PGP key must have expired. I tried verifying manually, and it seems that was the case: the subkey used for signing had expired. So I downloaded the new Tor Browser Developers signing key, created a PR, and it seems I could download tor now.
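For reference, checking the key manually is roughly this (these follow the Tor Project's documented verification steps; treat them as an illustrative sketch rather than the exact commands I ran):

# fetch the current Tor Browser Developers signing key via WKD
gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org
# list the key and its subkeys; an expired signing subkey shows an [expired: ...] date
gpg --list-keys torbrowser@torproject.org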

Once that was done, I just needed to run:

/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build

And amazingly, it built successfully on the very first try! That was easy! Now I just needed to merge the Intel app bundle and the Silicon app bundle, and everything should work (spoiler alert: it doesn't!).

Once the app bundle was created, it was time to sign and notarize. However, that process was a little difficult for me to do, since Micah had previously used an individual account. So I passed the universal2 bundle on to him and moved on to the signing work on Windows.

Signing the Windows package

I had to boot into my Windows 11 VM to finish the signing and make the Windows release. Since this was the first time I was doing the release, I first had to get my VM ready by installing all the dependencies needed for signing and packaging. I am not super familiar with the Windows development environment, so I had to figure out things like adding to PATH to make all the dependencies work. The next thing to do was setting up the HARICA smart card.

Setting up the HARICA smart card

Thankfully, Micah had already done this before, so he was able to help me out a bit. I had to log into the control panel, download and import the certificates to my smart card, and change the token password and administrator password for the card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.

Since Micah had already made some changes to fix the code signing and packaging, it went pretty smoothly for me, and I didn't face many obstructions. Science & Design, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name of Science and Design Inc.

Meanwhile, Micah had got back to me saying that the universal2 bundle didn't work.

So, the Apple Silicon bundle didn't work

One of the mistakes I made was not testing my Apple Silicon build. I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working: it kept giving a segmentation fault. Time to get back to debugging.

Downgrading cx-freeze to 6.15.9

The first thought that came to my mind was that Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try a build (instead of bdist_mac) with the cx-freeze version from that time (which was 6.15.9) and check whether the resulting binary worked. Thankfully, it did. I tried with 6.15.10, and it didn't. So I decided to stick to 6.15.9.

So let's now try running bdist_mac, create a .app bundle, and hopefully everything will work perfectly! But nope! The command failed with:

OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory

So now I had a decision to make: should I try to monkey-patch this and just figure out how to fix it, or try to make the latest cx-freeze work? I decided to give the latest cx-freeze (version 6.15.15) another try.

Trying zip_include_packages

One thing I noticed we were doing differently from what the cx-freeze documentation and the PySide6 examples mention was that we put our dependencies in packages, instead of zip_include_packages, in the setup options.

    "build_exe": {
        "packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I thought, let's try moving all of the dependencies from packages into zip_include_packages. Basically, zip_include_packages includes the dependencies in the zip file, whereas packages places them in the file system and not in the zip file. My guess was that the expected structure of a .app bundle on Apple Silicon had changed. So the new options looked something like this:

    "build_exe": {
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "onionshare",
            "onionshare_cli",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

So I created a build using that, ran the binary, and it gave an error. But I was happy, because it wasn't a segmentation fault. The error was mainly because it was not able to import some functions from onionshare_cli. So as a next step, I decided to move everything apart from onionshare and onionshare_cli to zip_include_packages. It looked something like this:

    "build_exe": {
        "packages": [
            "onionshare",
            "onionshare_cli",
        ],
        "zip_include_packages": [
            "cffi",
            "engineio",
            "engineio.async_drivers.gevent",
            "engineio.async_drivers.gevent_uwsgi",
            "gevent",
            "jinja2.ext",
            "PySide6",
            "PySide6.QtCore",
            "PySide6.QtGui",
            "PySide6.QtWidgets",
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }

This almost worked. The problem was that PySide 6.4 had changed how it deals with enums, and we were still using deprecated code. Fixing the deprecations would take a lot of time, so I decided to create an issue for it and deal with it after the release.

At this point, I was pretty frustrated, so I decided to do what I didn't want to do: just have both packages and zip_include_packages. So I did that, built the binary, and it worked. I decided to make the .app bundle. It worked perfectly as well! Great!
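The resulting options were roughly a combination of the two snippets above (a sketch of the idea, not the exact final setup-freeze.py):

    "build_exe": {
        # dependencies stay on the file system ...
        "packages": [
            "cffi",
            "engineio",
            ...
            "onionshare",
            "onionshare_cli",
            "PySide6",
            ...
        ],
        # ... and are also listed for the zipped library
        "zip_include_packages": [
            "cffi",
            "engineio",
            ...
            "PySide6",
            ...
        ],
        "excludes": [
            "test",
            "tkinter",
            ...
        ],
        ...
    }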

I was a little worried that adding the dependencies in both packages and zip_include_packages might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I don't need to replace the lib directory inside the .app bundle anymore. I ran the cleanup code, hit some FileNotFoundErrors, tried to find out if the files were now in a different location, couldn't find them, and decided to wrap those steps in a try-except block.
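Conceptually, the change was just to tolerate the now-missing paths during cleanup, something like this (an illustrative sketch, not the actual code in scripts/build-macos.py; the path here is made up):

import shutil

# paths the older cx-freeze layout produced, which the new layout may not contain
paths_to_remove = [
    "build/OnionShare.app/Contents/Resources/lib/PySide6/Qt/qml",
]

for path in paths_to_remove:
    try:
        shutil.rmtree(path)
    except FileNotFoundError:
        # the new bundle layout no longer has this path, so just skip it
        pass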

After that, I merged the Silicon bundle with the Intel bundle to create the universal2 bundle again, sent it to Micah for signing, and it seems everything worked!

Creating PGP signature for all the builds

Now that we had all the build files ready, I tried installing and running them all, and it seemed everything was working fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created the signatures. So the options for us now were:

  • create an OnionShare GPG key that everyone uses
  • sign with my GPG key and update the documentation to reflect the same

The issue with creating a new OnionShare GPG key was distribution: the maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.
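Creating and verifying a detached signature is standard GPG usage; roughly like this (the filename is just an example, not the actual artifact name):

# create an ASCII-armored detached signature next to the build artifact
gpg --armor --detach-sign OnionShare-2.6.1.dmg
# users can then verify the download against the published public key
gpg --verify OnionShare-2.6.1.dmg.asc OnionShare-2.6.1.dmg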

Concluding the release

Once the signatures were done, the next steps were mostly straightforward:

  • Create a GitHub release
  • Publish onionshare-cli on PyPi
  • Push the build and signatures to the onionshare.org servers and update the website and docs
  • Create PRs in Flathub and Homebrew cask
  • Promote the snapcraft edge release to stable

The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.

February 29, 2024 12:41 PM

Securing via systemd, a story

Last night I deployed a https://writefreely.org based blog and secured it with systemd by adding DynamicUser=yes. But then the service itself could not write to the SQLite database.

Feb 28 21:37:52 kushaldas.se writefreely[1652088]: ERROR: 2024/02/28 21:37:52 database.go:3000: Couldn't insert into posts: attempt to write a readonly database

This morning I realized that the setting blocks writing to all paths except a few temporary ones. I had to add a StateDirectory and use the same directory as the WorkingDirectory so that the service works correctly.
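A minimal sketch of the relevant unit settings (the directory name and ExecStart path are illustrative, not the actual unit file; DynamicUser=yes makes most of the filesystem read-only for the service, while StateDirectory= gives it a writable directory under /var/lib):

[Service]
DynamicUser=yes
# creates /var/lib/writefreely owned by the dynamic user and keeps it writable
StateDirectory=writefreely
# run from that writable directory so the sqlite database can be written
WorkingDirectory=/var/lib/writefreely
ExecStart=/usr/local/bin/writefreely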

February 29, 2024 11:36 AM

Babel


Cover of the book Babel: a greyscale image with lots of medieval line art in the back depicting old England, with the word Babel in a decorative font running down the middle.

image courtesy, Harper Collins India


Reading Babel constantly gave me strong déjà vu as I leafed through its pages.1
Until I realised what I was experiencing was an author enjoying their words, their lines, their exquisite craft. Where words themselves crawl into your mind, and say yes, we are. We will tell you this story. You need not make any effort here.
Richard Powers’ The Overstory was that way. So was Arundhati Roy’s The God of Small Things. And Tamsyn Muir, with her Locked Tomb books.

This is alternate-history Empire. The British still rule the world, but here they do so on the strength of magic, silver-work, and words and language.
Kuang then uses all that heft to bash our brains in.2
There’s Robin and Ramy and Victoire and Letty and it’s them against the world3

This was one of those rare books4 where I wanted very little to do with the story and just wanted to lose myself in the world. I wanted to study languages the way they did, and do magic the way they did. The way some of them come to choose violence reminded me of Bhagat and Azad.

The history’s wonderful.
The way she uses language and words, even more so!


My highlights, from the book

Inside, the heady wood-dust smell of freshly printed books was overwhelming. If tobacco smelled like this, Robin thought, he’d huff it every day. He stepped towards the closest shelf, hand lifted tentatively towards the books on display, too afraid to touch them – they seemed so new and crisp; their spines were uncracked, their pages smooth and bright. Robin was used to well-worn, waterlogged tomes; even his Classics grammars were decades old. These shiny, freshly bound things seemed like a different class of object, things to be admired from a distance rather than handled and read.

‘If you can see?’ The woman raised her voice and overenunciated her every syllable, as if Robin had difficulty hearing. (This had happened often to Robin on the Countess of Harcourt; he could never understand why people treated those who couldn’t understand English as if they were deaf.)

‘But that’s impossible for me,’ said Ramy. ‘I have to play a part. Back in Calcutta, we all tell the story of Sake Dean Mahomed, the first Muslim from Bengal to become a rich man in England. He has a white Irish wife. He owns property in London. And you know how he did it? He opened a restaurant, which failed; and then he tried to be hired as a butler or valet, which also failed. And then he had the brilliant idea of opening a shampoo house in Brighton.’ Ramy chuckled. ‘Come and get your healing vapours! Be massaged with Indian oils! It cures asthma and rheumatism; it heals paralysis. Of course, we don’t believe that at home. But all Dean Mahomed had to do was give himself some medical credentials, convince the world of this magical Oriental cure, and then he had them eating out of the palm of his hand. So what does that tell you, Birdie? If they’re going to tell stories about you, use it to your advantage. The English are never going to think I’m posh, but if I fit into their fantasy, then they’ll at least think I’m royalty.’

‘Does Mirza really mean “prince”?’ Robin asked, after he’d overheard Ramy declare this to a shopkeeper for the third time.
‘Sure. Well, really, it’s a title – it’s derived from the Persian Amīrzādeh, but “prince” comes close enough.’
‘Then are you—?’
‘No.’ Ramy snorted. ‘Well. Perhaps once. That’s the family story, anyhow; my father says we were aristocrats in the Mughal court, or something like that. But not anymore.’
‘What happened?’
Ramy gave him a long look. ‘The British, Birdie. Keep up.’

After all, we’re here to make the unknown known, to make the other familiar. We’re here to make magic with words.’

‘Which seems right to you? Do we try our hardest, as translators, to render ourselves invisible? Or do we remind our reader that what they are reading was not written in their native language?’
‘That’s an impossible question,’ said Victoire. ‘Either you situate the text in its time and place, or you bring it to where you are, here and now. You’re always giving something up.’
‘Is faithful translation impossible, then?’ Professor Playfair challenged. ‘Can we never communicate with integrity across time, across space?’
‘I suppose not,’ Victoire said reluctantly.
‘But what is the opposite of fidelity?’ asked Professor Playfair. He was approaching the end of this dialectic; now he needed only to draw it to a close with a punch. ‘Betrayal. Translation means doing violence upon the original, means warping and distorting it for foreign, unintended eyes. So then where does that leave us? How can we conclude, except by acknowledging that an act of translation is then necessarily always an act of betrayal?’

we can think of etymology as an exercise in tracing how far a word has strayed from its roots. For they travel marvellous distances, both literally and metaphorically.’ He looked suddenly at Robin. ‘What’s the word for a great storm in Mandarin?’
Robin gave a start. ‘Ah – fēngbào?’
‘No, give me something bigger.’
‘Táifēng?’
‘Good.’ Professor Lovell pointed to Victoire. ‘And what weather patterns are always drifting across the Caribbean?’
‘Typhoons,’ she said, then blinked. ‘Taifeng? Typhoon? How—’
‘We start with Greco-Latin,’ said Professor Lovell. ‘Typhon was a monster, one of the sons of Gaia and Tartarus, a devastating creature with a hundred serpentine heads. At some point he became associated with violent winds, because later the Arabs started using tūfān to describe violent, windy storms. From Arabic it hopped over to Portuguese, which was brought to China on explorers’ ships.’
‘But táifēng isn’t just a loanword,’ said Robin. ‘It means something in Chinese – tái is great, and fēng is wind–’
‘And you don’t think the Chinese could have come up with a transliteration that had its own meaning?’ asked Professor Lovell. ‘This happens all the time. Phonological calques are often semantic calques as well. Words spread. And you can trace contact points of human history from words that have uncannily similar pronunciations. Languages are only shifting sets of symbols – stable enough to make mutual discourse possible, but fluid enough to reflect changing social dynamics.

And the influences on English were so much deeper and more diverse than they thought. Chit came from the Marathi chitti, meaning ‘letter’ or ‘note’. Coffee had made its way into English by way of Dutch (koffie), Turkish (kahveh), and originally Arabic (qahwah). Tabby cats were named after a striped silk that was in turn named for its place of origin: a quarter of Baghdad named al-‘Attābiyya. Even basic words for clothes all came from somewhere. Damask came from cloth made in Damascus; gingham came from the Malay word genggang, meaning ‘striped’; calico referred to Calicut in Kerala, and taffeta, Ramy told them, had its roots in the Persian word tafte, meaning ‘a shiny cloth’.

English did not just borrow words from other languages; it was stuffed to the brim with foreign influences, a Frankenstein vernacular. And Robin found it incredible, how this country, whose citizens prided themselves so much on being better than the rest of the world, could not make it through an afternoon tea without borrowed goods.

‘The Germans have this lovely word, Sitzfleisch,’ Professor Playfair said pleasantly when Ramy protested that they had over forty hours of reading a week. ‘Translated literally, it means “sitting meat”. Which all goes to say, sometimes you need simply to sit on your bottom and get things done.

He waved a hand, gesturing at an invisible map. ‘It’s junctures like that where we have control. If we push in the right spots – if we create losses where the Empire can’t stand to suffer them – then we’ve moved things to the breaking point. Then the future becomes fluid, and change is possible. History isn’t a premade tapestry that we’ve got to suffer, a closed world with no exit. We can form it. Make it. We just have to choose to make it.’5

Oxford, and Babel by extension, were, at their roots, ancient religious institutions, and for all their contemporary sophistication, the rituals that comprised university life were still based in medieval mysticism. Oxford was Anglicanism was Christianity, which meant blood, flesh, and dirt.*

‘The British aren’t going to invade with English troops. They’re going to invade with troops from Bengal and Bombay. They’re going to have sepoys fight the Afghans, just like they had sepoys fight and die for them at Irrawaddy, because those Indian troops have the same logic you do, which is that it’s better to be a servant of the Empire, brutal coercion and all, than to resist. Because it’s safe. Because it’s stable, because it lets them survive. And that’s how they win, brother. They pit us against each other. They tear us apart.’

‘I don’t think I’ll ever forget what I saw.’ He rested his elbows against the bridge and sighed. ‘Rows and rows of flowers. A whole ocean of them. They’re such bright scarlet that the fields look wrong, like the land itself is bleeding. It’s all grown in the countryside. Then it gets packed and transported to Calcutta, where it’s handed off to private merchants who bring it straight here. The two most popular opium brands here are called Patna and Malwa. Both regions in India. From my home straight to yours, Birdie. Isn’t that funny?’ Ramy glanced sideways at him. ‘The British are turning my homeland into a narco-military state to pump drugs into yours. That’s how this empire connects us.’

Free trade. This was always the British line of argument – free trade, free competition, an equal playing field for all. Only it never ended up that way, did it? What ‘free trade’ really meant was British imperial dominance, for what was free about a trade that relied on a massive build-up of naval power to secure maritime access? When mere trading companies could wage war, assess taxes, and administer civil and criminal justice?

Robin saw a great spider’s web in his mind then. Cotton from India to Britain, opium from India to China, silver becoming tea and porcelain in China, and everything flowing back to Britain. It sounded so abstract – just categories of use, exchange, and value – until it wasn’t; until you realized the web you lived in and the exploitations your lifestyle demanded, until you saw looming above it all the spectre of colonial labour and colonial pain.

My point being, abolition happened because white people found reasons to care – whether those be economic or religious. You just have to make them think they came up with the idea themselves. You can’t appeal to their inner goodness. I have never met an Englishman I trusted to do the right thing out of sympathy.’

‘Eventually.’
Anthony laughed gently. ‘Do you think abolition was a matter of ethics? No, abolition gained popularity because the British, after losing America, decided that India was going to be their new golden goose. But cotton, indigo, and sugar from India weren’t going to dominate the market unless France could be edged out, and France would not be edged out, you see, as long as the British slave trade was making the West Indies so very profitable for them.’

Colonialism is not a machine capable of thinking, a body endowed with reason. It is naked violence and only gives in when confronted with greater violence.
— FRANTZ FANON, The Wretched of the Earth, trans. Richard Philcox

‘It’s so odd,’ Robin said. Back then they’d already passed the point of honesty; they spoke to one another unfiltered, unafraid of the consequences. ‘It’s like I’ve known you forever.’ ‘Me too,’ Ramy said. ‘And that makes no sense,’ said Robin, drunk already, though there was no alcohol in the cordial. ‘Because I’ve known you for less than a day, and yet . . .’ ‘I think,’ said Ramy, ‘it’s because when I speak, you listen.’ ‘Because you’re fascinating.’ ‘Because you’re a good translator.’ Ramy leaned back on his elbows. ‘That’s just what translation is, I think. That’s all speaking is. Listening to the other and trying to see past your own biases to glimpse what they’re trying to say. Showing yourself to the world, and hoping someone else understands.’

how could there ever be an Adamic language? The thought now made him laugh. There was no innate, perfectly comprehensible language; there was no candidate, not English, not French, that could bully and absorb enough to become one. Language was just difference. A thousand different ways of seeing, of moving through the world. No; a thousand worlds within one. And translation – a necessary endeavour, however futile, to move between them.


Feedback on this post? Mail me at feedback at this domain

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. ok, I swiped on my Elipsa. But you get the idea! ↩︎

  2. in a good way ↩︎

  3. a lot like Bardugo’s, Six of Crows ↩︎

  4. the only other one for me was The Lord of the Rings. I suspect Kuang, just like Tolkien, had the words and the language and the world ready. And then she just went and wrote a book to show some of it. ↩︎

  5. reminds me of Steve Jobs, and his dent in the universe↩︎

February 29, 2024 03:42 AM

Mullvad VPN repository for Fedora

desktop client

Mullvad VPN now has a proper RPM repository for their desktop client. You can use it in the following way on your Fedora system:

sudo dnf config-manager --add-repo https://repository.mullvad.net/rpm/stable/mullvad.repo
sudo dnf install mullvad-vpn

Remember to verify the OpenPGP key Fingerprint:

Importing GPG key 0x66DE8DDF:
 Userid     : "Mullvad (code signing) <admin@mullvad.net>"
 Fingerprint: A119 8702 FC3E 0A09 A9AE 5B75 D5A1 D4F2 66DE 8DDF
 From       : https://repository.mullvad.net/rpm/mullvad-keyring.asc
February 27, 2024 05:37 PM

django-ca, HSM and PoC

django-ca is a feature-rich certificate authority written in Python, using the Django framework. The project has existed for a long time and has great documentation and code comments throughout. As I was looking around for possible CAs which can be used in multiple projects at work, django-ca seemed to be a good base fit, though it still has a few missing parts (which are important for us), for example HSM support and Certificate Management over CMS.

I started looking into the codebase of django-ca more, and meanwhile also started cleaning up (along with Magnus Svensson) another library written at work for HSM support. I also started having conversations with Mathias (who is the author of django-ca) about this feature.

Thanks to the amazing design from the Python Cryptography team, I could just add several private key implementations to our library, which in turn can be used like a normal private key.

I worked on a proof of concept branch (PoC), while getting a lot of tests also working.

===== 107 failed, 1654 passed, 32 skipped, 274 errors in 286.03s (0:04:46) =====

Meanwhile, Mathias also started writing a separate feature branch where he is encapsulating the key operations inside backends, so that different backends can be implemented to deal with HSMs or normal file-based storage. He then chatted with me on Signal for over 2 hours, explaining the code and design of the branch he is working on. On the same call he also taught me many other Django/typing things which I never knew before. His backend-based approach makes my original intention of adding HSM support very easy. But it also means he has to modify the codebase (and the thousands of test cases) first.

I am also writing this blog post to remind folks that not every piece of code needs to go to production (or even get merged). I worked on a PoC that validates the idea, and now we have a better and completely different design. It is perfectly okay to work hard on a PoC and later use a different approach.

As some friends asked on Mastodon, I will do a separate post about the cleanup of the other library.

February 25, 2024 09:25 AM

New Blog


This is the beginning of my new blog! While https://blog.araj.me was previously running on Ghost as well, this is a new install, primarily because I couldn't easily get the data back from my previous Ghost install. It still lives in a MySQL instance, so old posts might appear on this instance too if I feel like it at some point.

What am I going to write about? I've been working a lot on my homelab setup, so that is probably going to be the starting point. I have also been trying out OpenWRT for my router (running on an EdgeRouter X; who could've thought it can run with 95% space available and over 65% free memory) and struggling to re-configure VLANs to segregate my homelab, the "regular internet" for my wife and guests, and the IoT stuff. Setting up VLANs on OpenWRT was not fun; I took down the internet a couple of times, which wasn't appreciated at home. So I ended up flashing another old TP-Link router I had, to learn OpenWRT and try out settings there before applying them to the main router.

My homelab currently runs on an Intel NUC 10 i7 (6C12T, 16G RAM), which has been plenty for my current use cases. I've over-provisioned it with Proxmox VE as the hypervisor of choice. I am using an actual hypervisor-based setup for the first time, and there is no going back now! For some reason, I tried out XCP-ng as well, but with XOA I couldn't figure out how to do some stuff, so that setup is currently turned off. Maybe I'll dust it off again at some point. I do have 2 more nodes on standby to run more things, but that'll probably happen once I shift to my new house (hopefully soon!).

by Abhilash Raj at January 10, 2024 05:44 PM

Missing rubygem json-canonicalization 0.3.2

I had not upgraded our Mastodon server from 4.1.9 to 4.2.0 for a long time. Finally, while doing so this morning, I got the following error with the bundle install command.

Your bundle is locked to json-canonicalization (0.3.2) from rubygems repository
https://rubygems.org/ or installed locally, but that version can no longer be
found in that source. That means the author of json-canonicalization (0.3.2) has
removed it. You'll need to update your bundle to a version other than
json-canonicalization (0.3.2) that hasn't been removed in order to install.

I have no clue about how Ruby works, but somehow just updating the lockfile via bundle lock --update json-canonicalization did not help. I finally updated the Gemfile.lock file manually to have json-canonicalization (0.3.3). That solved the issue, and I could continue with the update steps.
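The manual edit amounts to bumping the pinned version in the specs section of Gemfile.lock, roughly this one-line change:

-    json-canonicalization (0.3.2)
+    json-canonicalization (0.3.3)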

January 03, 2024 05:10 PM

Documentation of Puppet code using sphinx

Sphinx is the primary documentation tooling for most of my projects; I use it for the Linux command line book too. Last Friday, while chatting with Leif about documenting all of our Puppet codebase, I thought of mixing these two.

Now, Puppet already has a tool to generate documentation from its code, called puppet strings. We can use that to generate Markdown output and then use the same in Sphinx for the final HTML output.

I am using https://github.com/simp/pupmod-simp-simplib as the example Puppet code, as it comes with a good amount of reference documentation.

Install puppet strings and the dependencies

$ gem install yard puppet-strings

Then clone the Puppet codebase.

$ git clone https://github.com/simp/pupmod-simp-simplib

Finally, generate the initial Markdown output.

$ puppet strings generate --format markdown --out simplib.md
Files                     161
Modules                   3 (3 undocumented)
Classes                   0 (0 undocumented)
Constants                 0 (0 undocumented)
Attributes                0 (0 undocumented)
Methods                   5 (0 undocumented)
Puppet Tasks              0 (0 undocumented)
Puppet Types              7 (0 undocumented)
Puppet Providers          8 (0 undocumented)
Puppet Plans              0 (0 undocumented)
Puppet Classes            2 (0 undocumented)
Puppet Data Type Aliases  73 (0 undocumented)
Puppet Defined Types      1 (0 undocumented)
Puppet Data Types         0 (0 undocumented)
Puppet Functions          68 (0 undocumented)
 98.20% documented

sphinx setup

python3 -m venv .venv
source .venv/bin/activate
python3 -m pip install sphinx myst_parser
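
If you don't already have a Sphinx project to reuse, sphinx-quickstart can scaffold one; a minimal sketch, where the docs directory, project name, and author are placeholders:

sphinx-quickstart docs --quiet --project "simplib docs" --author "your-name"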

After creating a standard Sphinx project (or reusing your existing one), update the conf.py with the following.

extensions = ["myst_parser"]
source_suffix = {
    '.rst': 'restructuredtext',
    '.txt': 'markdown',
    '.md': 'markdown',
}

Then copy over the generated Markdown from the previous step and use the sed command to update the title of the document to something better.

$ sed -i '1 s/^.*$/SIMPLIB Documentation/' simplib.md

Don't forget to add the simplib.md file to your index.rst and then build the HTML documentation.
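
For reference, the index.rst entry can be as small as the following toctree sketch (assuming simplib.md sits next to index.rst; the rest of the file stays as generated):

.. toctree::
   :maxdepth: 2

   simplib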

$ make html

We can still improve the Markdown generated by the puppet strings command; I have to figure out simpler ways to do that part.

Example output

September 25, 2023 09:23 AM

Ansible 8.3.0 out now

I released Ansible 8.3.0 on 15th August, 2023. This is the latest Ansible stable release. You can read the full changelog here.

You can install it via pip.

python3 -m pip install ansible==8.3.0 --user

You can have a look at the announcement.

Fun fact: while working on the release, GitHub worked at below 30 KiB/s the whole day.

ansible_logo

To follow all our updates on the Ansible project and community, subscribe to Bullhorn, our weekly newsletter. Join us in our community discussion in this Matrix room.

by Anwesha Das at August 17, 2023 02:59 PM

Safeguarding Our Digital Lives: As Prevention is Better than the Cure

Today, I stumbled upon some deeply concerning news regarding the unauthorized leak of private pictures belonging to a 16-year-old girl from her online account. This incident serves as a stark reminder of the risks we face in the digital world. We must exercise caution and thoughtfulness when sharing anything online, as once something is uploaded, it can be extremely challenging and almost impossible to completely remove it. Almost all of us know the trouble we have to go through to get our own pictures removed from fake profiles on social media, and their customer support is nearly non-existent.

I want to stress the importance of educating our friends and family about these risks. Here are some key points to consider:

  1. Think before you upload: Carefully consider the content you are about to share online. Once it is out there, it becomes difficult to control who sees it or how it is used. Only upload content that you are comfortable sharing with others.
  2. Enable two-factor authentication (2FA): Always activate two-factor authentication whenever possible to enhance the security of your online accounts. This additional layer of protection requires a second verification step, usually through a code sent to your mobile device.
  3. Exercise caution with links and apps: Be wary of clicking on unfamiliar links, as they may lead to malicious websites or prompt the installation of harmful applications. Verify the authenticity of links and only download apps from trusted sources like official app stores.
  4. Keep your devices secure: Never leave your phone or laptop unattended, especially in public places. These devices contain personal information that could be exploited if they fall into the wrong hands. Always use strong, unique passwords and enable biometric authentication methods, such as fingerprint or facial recognition, whenever possible.
  5. Practice caution on social media: Fake accounts and online impersonation are unfortunate realities. To mitigate the risks, only accept friend or follower requests from people you know and trust. Be cautious when sharing personal information and regularly review your privacy settings to ensure they align with your comfort level.

It is crucial to spread awareness about these practices among our friends and family. By adopting them, we can help safeguard our digital lives and reduce the chances of falling victim to online privacy breaches. Additionally, if someone threatens to leak your information, it is important to confide in an elder whom you trust and then lodge a complaint. You can use resources such as https://cybercrime.gov.in/ to report the incident and seek appropriate assistance.

While the aforementioned points are preventive measures, if you find yourself in a troubling situation where your private information has already been leaked, the first step is to confide in an elder whom you trust. They can provide guidance and support. It is also essential to lodge a complaint using resources like https://cybercrime.gov.in/ to ensure the appropriate authorities are aware of the situation and can take necessary actions.

Remember, it is never too late to seek help, and reporting such incidents is crucial for your protection and the well-being of others. Let’s continue to raise awareness and empower ourselves and those around us to navigate the digital world safely and responsibly.

by shivam at July 10, 2023 12:43 PM

Two more Ansible Releases

After coming back from DevConf.cz, on the 22nd of June I released Ansible Community Package 7.7.0 and 8.1.0. These are the last release of the Ansible 7 series and the first minor release of the Ansible 8 series.

Ansible Community Package 7.7.0

Ansible 7.7.0 requires the latest version of ansible-core 2.14 and includes a curated set of Ansible collections that provide a vast number of modules and plugins.

One can have a look at the full Changelog.

You can install it via pip.

pip install ansible==7.7.0 --user

Ansible Community Package 8.1.0

Ansible 8.1.0 requires the latest version of ansible-core 2.15.1 and includes a curated set of Ansible collections providing a huge number of modules and plugins.

You can get the Changelog here.

One can install Ansible 8.1.0 via pip

$ python3 -m pip install ansible==8.1.0 --user

Roadmap for future releases

You can read our roadmap for the Ansible 8 release cycle here and the changelog here. To follow all our updates on the Ansible project and community, subscribe to Bullhorn, our weekly newsletter. The ETA for the next Ansible Community Package release, i.e. 8.2.0, is 18th July, 2023.

by Anwesha Das at July 03, 2023 03:41 PM

Upgrading Kubernetes Cluster

June 08, 2023


Disclaimer:

Just trying to document the process (strictly) for me.

This documentation is just for educational purposes.

This process should not be followed on any production cluster!

Aim

To upgrade a Kubernetes cluster with nodes running Kubernetes Version v1.26.4 to v1.27.2

I’m using a Kubernetes cluster created using Kind, for example’s sake.

[STEP 1] Create a kind Kubernetes cluster

Use the following kind-config.yaml file:

# two node (one control plane, one worker) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e
- role: worker
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e

Please note:

  • The above config file for the Kind cluster will create a Kubernetes cluster with 2 nodes:
    • Control Plane Node (name: kind-control-plane) Kubernetes Version: v1.26.4
    • Worker Node (name: kind-worker) Kubernetes Version: v1.26.4

Run the following command to create the cluster:

$ kind create cluster --config kind-config.yaml 
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.26.4) 🖼
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

Verify that the cluster came up successfully:

$ kubectl get nodes -o wide

NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   2m43s   v1.26.4   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          2m25s   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Note that the version of both nodes is currently v1.26.4


[STEP 2] Upgrade the control plane node

Exec inside the docker container corresponding to the control plane node (kind-control-plane):

$ docker exec -it kind-control-plane bash

root@kind-control-plane:/# 

Install the utility packages:

root@kind-control-plane:/# apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-control-plane:/# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-control-plane:/# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-control-plane:/# apt-get update

Check which version to upgrade to (in our case, we’re checking if v1.27.2 is available)

root@kind-control-plane:/# apt-cache madison kubeadm

  kubeadm |  1.27.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.27.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.27.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.26.5-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  kubeadm |  1.26.4-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
  ...

Upgrade Kubeadm to the required version:

root@kind-control-plane:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm

...
Setting up kubeadm (1.27.2-00) ...

Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.
...

root@kind-control-plane:/# kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:18:49Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}

Check & Verify the Kubeadm upgrade plan:

root@kind-control-plane:/# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade/versions] Target version: v1.27.2
[upgrade/versions] Latest version in the v1.26 series: v1.26.5
W0608 12:57:04.800282    5535 compute.go:307] [upgrade/versions] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.26.5
            1 x v1.27.2   v1.26.5

Upgrade to the latest version in the v1.26 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.26.5
kube-controller-manager   v1.26.4   v1.26.5
kube-scheduler            v1.26.4   v1.26.5
kube-proxy                v1.26.4   v1.26.5
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.26.5

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.27.2
            1 x v1.27.2   v1.27.2

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.27.2
kube-controller-manager   v1.26.4   v1.27.2
kube-scheduler            v1.26.4   v1.27.2
kube-proxy                v1.26.4   v1.27.2
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.27.2

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

We will upgrade to the latest stable version (v1.27.2):

root@kind-control-plane:/# kubeadm upgrade apply v1.27.2

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.27.2"
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0608 12:59:23.499649    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:07.900906    5571 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.27.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
W0608 13:00:48.303106    5571 staticpods.go:305] [upgrade/etcd] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:48.305410    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests56128700"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2613181160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.27.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

I’m skipping upgrading the CNI (I don’t have any additional CNI provider plugin other than the Kind cluster default, kindnet).

But if you need to check how kindnet is working, do the following inside the control plane node:

root@kind-control-plane:/# crictl ps

...
5715f2f6e401c       b0b1fa0f58c6e       8 minutes ago       Running             kindnet-cni               2                   3d78434184edf       kindnet-blltq
...
root@kind-control-plane:/# crictl logs 5715f2f6e401c   
I0608 13:02:38.079089       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.080550       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.080592       1 main.go:93] apiserver not reachable, attempt 0 ... retrying
I0608 13:02:38.080600       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.081047       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.081072       1 main.go:93] apiserver not reachable, attempt 1 ... retrying
I0608 13:02:39.081260       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:39.082375       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:39.082405       1 main.go:93] apiserver not reachable, attempt 2 ... retrying
I0608 13:02:41.082727       1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:41.083924       1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:41.083963       1 main.go:93] apiserver not reachable, attempt 3 ... retrying
I0608 13:02:44.085510       1 main.go:316] probe TCP address kind-control-plane:6443
I0608 13:02:44.088241       1 main.go:102] connected to apiserver: https://kind-control-plane:6443
I0608 13:02:44.088270       1 main.go:107] hostIP = 172.18.0.3
podIP = 172.18.0.3
I0608 13:02:44.088459       1 main.go:116] setting mtu 1500 for CNI 
I0608 13:02:44.088536       1 main.go:146] kindnetd IP family: "ipv4"
I0608 13:02:44.088559       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0608 13:02:44.278193       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0608 13:02:44.278210       1 main.go:227] handling current node
I0608 13:02:44.280741       1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0608 13:02:44.280753       1 main.go:250] Node kind-worker has CIDR [10.244.1.0/24] 
I0608 13:02:54.293198       1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]

Now, before we go and upgrade the kubelet & kubectl (and restart the services), open a new terminal (outside the docker exec) and mark the node unschedulable (cordon), then evict the workload (drain):

# Outside the docker exec terminal
$ kubectl drain kind-control-plane --ignore-daemonsets

node/kind-control-plane cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-blltq, kube-system/kube-proxy-rfbd5
evicting pod local-path-storage/local-path-provisioner-6bd6454576-xlvmc
pod/local-path-provisioner-6bd6454576-xlvmc evicted
node/kind-control-plane drained

$ kubectl get nodes -o wide

NAME                 STATUS                     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready,SchedulingDisabled   control-plane   47m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready                      <none>          47m   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Now, come back to the former terminal with the docker exec (into the control-plane node) and upgrade the kubelet and kubectl:

root@kind-control-plane:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease             
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease                       
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease             
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.

And now restart the kubelet:

root@kind-control-plane:/# systemctl daemon-reload
root@kind-control-plane:/# systemctl restart kubelet

And now go back to the other terminal outside the docker exec, and uncordon the node:

$ kubectl uncordon kind-control-plane

node/kind-control-plane uncordoned

And that’s everything for the control plane upgrade! Finally, check that it is running properly:

$ kubectl get nodes -o wide

NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   52m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          51m   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

And don’t forget to exit from the docker exec terminal (kind-control-plane):

root@kind-control-plane:/# exit
exit

[STEP 3] Upgrade the worker node

Exec inside the docker container corresponding to the worker node (kind-worker):

$ docker exec -it kind-worker bash
root@kind-worker:/# 

Install the utility packages:

root@kind-worker:/#  apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-worker:/#  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-worker:/#  cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-worker:/#  apt-get update

Upgrade Kubeadm to the required version:

root@kind-worker:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm

...
Setting up kubeadm (1.27.2-00) ...

Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.

Run kubeadm upgrade (For worker nodes this upgrades the local kubelet configuration):

root@kind-worker:/# kubeadm upgrade node

[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2909228160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Now, before we go and upgrade the kubelet & kubectl (and restart the services), open a new terminal (outside the docker exec of the kind-worker container) and mark the node unschedulable (cordon), then evict the workload (drain):

# Outside the docker exec terminal
$ kubectl drain kind-worker --ignore-daemonsets

node/kind-worker cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qpx8l, kube-system/kube-proxy-5xf5d
evicting pod local-path-storage/local-path-provisioner-6bd6454576-km824
evicting pod kube-system/coredns-5d78c9869d-mvgjq
evicting pod kube-system/coredns-5d78c9869d-zrmm4
pod/coredns-5d78c9869d-mvgjq evicted
pod/coredns-5d78c9869d-zrmm4 evicted
pod/local-path-provisioner-6bd6454576-km824 evicted
node/kind-worker drained

$ kubectl get nodes -o wide
NAME                 STATUS                     ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready                      control-plane   62m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready,SchedulingDisabled   <none>          61m   v1.27.2   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

Now, come back to the former terminal with the docker exec (into the kind-worker container) and upgrade the kubelet and kubectl:

root@kind-worker:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl

kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease            
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease                      
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease            
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.
kubectl set on hold.

And now restart the kubelet:

root@kind-worker:/# systemctl daemon-reload
root@kind-worker:/# systemctl restart kubelet

And now go back to the other terminal outside the docker exec, and uncordon the node:

$ kubectl uncordon kind-worker

node/kind-worker uncordoned

And that’s everything for the worker node upgrade! Finally, check that it is running properly:

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   67m   v1.27.2   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          66m   v1.27.2   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21

And don’t forget to exit from the docker exec terminal (kind-worker):

root@kind-worker:/# exit
exit

With that, both our nodes are now successfully upgraded from Kubernetes version v1.26.4 to v1.27.2.


June 08, 2023 12:00 AM

CSS: Combinators

In CSS, combinators are used to select content by combining selectors in specific relationships. There are different types of relationships that can be used to combine selectors.

Descendant combinator

The descendant combinator is represented by a space “ ” and is typically used between two selectors. It selects elements matched by the second selector if they have an ancestor (parent, grandparent, and so on) matched by the first selector. Selectors that use this combinator are called descendant selectors.

.cover p {
    color: red;
}
<div class="cover"><p>Text in .cover</p></div>
<p>Text not in .cover</p>

In this example, the text “Text in .cover” will be displayed in red.

Child combinators

The child combinator is represented by “>” and is used between two selectors. An element is selected only if it is matched by the second selector and is a direct child of an element matched by the first selector. This means there must not be any other element in between the element matched by the first selector and the element matched by the second.

ul > li {
    border-top: 5px solid red;
} 
<ul>
    <li>Unordered item</li>
    <li>Unordered item
        <ol>
            <li>Item 1</li>
            <li>Item 2</li>
        </ol>
    </li>
</ul>

In this example, the <li> elements that are direct children of the <ul> (the “Unordered item” entries) will have a red top border, while the nested <ol> items will not.

Adjacent sibling combinator

The adjacent sibling combinator is represented by “+” and is placed between two CSS selectors. An element is selected only if it is matched by the second selector and directly follows (is the adjacent sibling of) an element matched by the first selector.

h1 + span {
    font-weight: bold;
    background-color: #333;
    color: #fff;
    padding: .5em;
}
<div>
    <h1>A heading</h1>
    <span>Veggies es bonus vobis, proinde vos postulo essum magis kohlrabi welsh onion daikon amaranth tatsoi tomatillo
            melon azuki bean garlic.</span>

    <span>Gumbo beet greens corn soko endive gumbo gourd. Parsley shallot courgette tatsoi pea sprouts fava bean collard
            greens dandelion okra wakame tomato. Dandelion cucumber earthnut pea peanut soko zucchini.</span>
</div>

In this example, only the first <span> (the one immediately following the <h1>) will have the given CSS properties.

General sibling combinator

The general sibling combinator is represented by “~”. We use it when we want to select all following siblings of an element matched by the first selector, not only the adjacent sibling.

h1 ~ h2 {
    font-weight: bold;
    background-color: #333;
    color: #fff;
    padding: .5em;
}
<article>
    <h1>A heading</h1>
    <h2>I am a paragraph.</h2>
    <div>I am a div</div>
    <h2>I am another paragraph.</h2>
</article>

In this example, every <h2> element that follows the <h1> will have the given CSS properties.

CSS combinators provide powerful ways to select and style content based on their relationships in the HTML structure. By understanding combinators, we can create clean, maintainable, and responsive web designs.

Cheers!

References: MDN Web Docs

#CSS #Combinators #WebDevelopment #FrontendDev

May 28, 2023 02:42 PM

Understanding massive ZeroDay impacting Dogecoin and 280+ networks including Litecoin and Zcash

Halborn discovered a massive #ZeroDay vulnerability code-named Rab13s impacting Dogecoin and 280+ networks, including Litecoin and Zcash, putting over $25 Billion of digital assets at risk.

To understand the zero-day vulnerability Rab13s, we need to go through some basic concepts. So I would like to explain blockchain and its key characteristics.

Blockchain is a data structure used to represent a cryptocurrency. It stores data in a way that allows multiple parties to access it reliably without having to trust one another.

The key characteristics of blockchain are:- 

Decentralized control: Communal consensus, rather than one party’s decision, dictates who gets to access or update the blockchain.

Tamper-evidence: It’s immediately obvious if data stored on the blockchain has been tampered with.

Nakamoto consensus: One has to provably spend resources when updating the blockchain.

Now as we know the basics of blockchain, we can learn about how transactions are added to the blockchain.

As new transactions happen on the blockchain, they are bundled into “blocks,” which are added to the blockchain with backlinks to enforce the order. The data is then stored by, and updates are broadcast to, everyone.

This image illustrates a series of transactions from various sources, each represented by a unique cartoon character.

Once a transaction is initiated, it is added to a queue to be processed and added to a block. On the left side of the image, multiple blocks are shown in a line, with transactions being selected for inclusion based on the fees paid. Transactions with higher fees are prioritized and added to the current block, while those with lower fees may have to wait for the next block.

As the block reaches its capacity, all transactions contained within it are finalized and added to the blockchain. This process ensures that all transactions are validated and securely recorded while incentivising miners to prioritize high-value transactions. Overall, this system helps ensure the blockchain network’s integrity and efficiency.

Now that we know how transactions are added to the blockchain, you might want to know who checks and verifies the transactions before adding blocks to the blockchain.

Mining is the process that Bitcoin and several other cryptocurrencies use to generate new coins and verify new transactions. It involves vast, decentralized networks of computers around the world that verify and secure blockchains – the virtual ledgers that document cryptocurrency transactions. In return for contributing their processing power, computers on the network are rewarded with new coins. It’s a virtuous circle: the miners maintain and secure the blockchain, the blockchain awards the coins, and the coins incentivise miners to maintain the blockchain.

The high-level view of mining:- 

  1. Download the entire Bitcoin blockchain
  2. Verify incoming transactions 
  3. Create a block 
  4. Do work; here it is a bunch of seemingly pointless brute-force computations to find a valid nonce (a toy sketch of this step follows the list)
  5. Broadcast your block 
  6. Get the Reward 
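
To make step 4 concrete, here is a toy proof-of-work sketch in Python. It is purely illustrative: real Bitcoin mining hashes a binary block header with double SHA-256 against a numeric difficulty target, not a zero-prefix string, and the header fields below are made-up placeholders.

import hashlib

def mine(block_header: str, difficulty: int) -> int:
    """Brute-force a nonce until the block hash starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # this nonce makes the block "valid" for our toy target
        nonce += 1

# Keep the difficulty small so the toy example finishes in a fraction of a second
print(mine("prev_hash|merkle_root|transactions", difficulty=4))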

Blockchain nodes play a critical role in maintaining the integrity of the blockchain. They are network stakeholders, and their devices are authorized to keep track of the distributed ledger and serve as communication hubs for various network tasks.

To add blocks to the blockchain, the nodes that are responsible for mining must come to a consensus. Multiple nodes are attempting to obtain the reward, so they must reach a mutual agreement before a block can be added to the blockchain.

Consensus refers to the process of reaching an agreement among participants in a network. In the context of a ledger of transactions, the consensus is required to agree on any changes made to it.

For instance, in the case where Alice promises 1 BTC to Bob in one transaction and the same 1 BTC to Carol in another, this creates a double spending attack that needs to be resolved through consensus.

An individual attempts to spend the same cryptocurrency twice in a double-spend attack. To execute this attack successfully, the attacker must control most of the network’s computing power, typically around 51%.

If Alice were to attempt a double spend attack, she would need to control the majority of the network’s computing power. Otherwise, the rest of the network would not accept her version of the blockchain.

It’s important to note that if anyone could create a node and add blocks to the network, it would make the network vulnerable to attacks from individuals seeking to manipulate the ledger for their benefit. To prevent this, the Nakamoto consensus protocol is used.

The Nakamoto consensus is a specific method used in the Bitcoin network to achieve consensus among participants. It involves requiring network participants to perform resource-intensive computations to add new blocks to the blockchain and to validate transactions.

This process of performing computationally-intensive tasks is not pointless; it serves as a way to prevent malicious actors from taking over the network and manipulating the ledger. The network can maintain security and prevent attacks by requiring participants to demonstrate that they have put in significant effort to add new blocks.  Doing so ensures that participants in the network have significant computational power and makes it more difficult for any individual to manipulate the ledger.

The image shows Alice sharing her version of the blockchain with the network. However, after completing their computations, Dan, Carol, and Bob reject her version of the blockchain. At the same time, other proxy nodes controlled by Alice are still performing their computations and attempting to vote.

Despite Alice’s efforts, the network ultimately rejects her version of the blockchain through the process of consensus. By majority vote, the network agrees to maintain the existing version of the blockchain, which is considered the most accurate and valid record of the network’s transactions.

So now I think you can understand the zero-day, which impacts most of these blockchains. According to Halborn, the most critical vulnerability discovered is related to peer-to-peer (p2p) communications, where attackers can craft consensus messages and send them to individual nodes, taking them offline.

Another zero-day identified by Halborn was uniquely related to #Dogecoin, including an RPC vulnerability impacting individual miners.

Subsequently, variants of these 0 days were also discovered in similar blockchain networks, potentially leading to DoS or RCE attacks.

We can understand the vulnerabilities mentioned above in terms of our Alice example.

As per Halborn, Alice now has the ability to send consensus messages that can cause the receiving nodes to shut down and disconnect from the network.

If a significant portion of the nodes in the blockchain network are offline, Alice could potentially gain control of the majority of the remaining nodes. With this level of control, Alice could attempt to launch a 51% attack on the network, which would allow her to carry out double-spending attacks and manipulate the blockchain’s transaction history.

Halborn was hired to audit Dogecoin’s code, and due to the severity of the issue, they did not release technical or exploit details at the time. I attempted to read their commit history messages to understand the technical details better. However, I found the relevant details in the security improvements section of their latest release. This release contains multiple security-related fixes:

  • The alert system has been removed, and the processing of alert messages has been disabled
  • The transaction download system has been made more reliable
  • The protocol implementation has been amended to reject buggy or malformed messages
  • Memory management in events of high network traffic or when connected to extremely slow peers has been improved

I believe the malicious actor was utilizing the alert system to send consensus messages, allowing them to perform remote command execution on the affected node. This allowed them to shut down the node or execute any other command they desired. The protocol has been amended to reject any buggy or malformed messages to prevent this from happening again. 

Links 

https://github.com/dogecoin/dogecoin/releases/

https://github.com/dogecoin/dogecoin/blob/master/doc/release-notes.md

https://github.com/dogecoin/dogecoin/commits/master?after=3a29ba6d497cd1d0a32ecb039da0d35ea43c9c85+139&branch=master&qualified_name=refs%2Fheads%2Fmaster

https://www.litebit.eu/en/dogecoin-fixed-a-vulnerability-that-persists-in-over-280-other-networks

https://dogecoin.com/dogepedia/how-tos/operating-a-node/#:~:text=Go%20to%20Dogecoin%20Core%20%2D%3E%20Preferences,Core%20on%20system%20login%E2%80%9D%20option.

by shivam at May 23, 2023 05:41 AM

What is stopping us from using free software?

I had a funny day yesterday

I'll start with the evening, which I spent as a tutor in the RoboLab; a workshop for kids aged 10-18 to build their own robot with some 3D-printed parts, an ESP and all the electrical equipment you need to make some wheels move. It's a great project and I have much respect for the people who initiated it and still maintain it in their free time with the children.

The space we can use for the project is called Digitallabor (digital lab) and offers anything you would want from a well equipped maker space, including a shelf full of laptops to use while you're working there.

I should not be surprised anymore, but I can't help it

Of course, all laptops run Windows. I picked one, booted it up and saw the fully bloated and ad-loaded standard installation of Windows 10. Last time a search for updates had been performed: early 2019. No customized privacy settings, nothing. Just the standard installation in all its ugliness.

I asked the people running the space why. Why? As this would be the perfect place to introduce the children to free software and even shed some light on the difference between Free and Open Source Software and proprietary, user-despising spyware (of course I did ask in a somewhat more diplomatic manner).

The answers: "The children are used to it.", "It's easier to maintain."

Yes. So the last search for updates was in 2019. That's well maintained.

Regarding the "the children are used to it": I can confirm that children don't give a shit. If it runs Minetest then it's fine. If they have access to a PC or laptop at home at all, because in my experience most of the kids nowadays have exactly two digital media skills anyway: tapping and swiping. So this would be the perfect place to introduce them to free alternatives!

The morning was different

We're a small company with only ~12 employees, most of whom are rather non-technical. So there is no IT department. Or in other words: I am the IT department. And our IT department finds it no longer responsible to run Windows on business PCs (at least in the world outside the US). So yesterday I prepared a new PC with Fedora 38, brought it to my colleagues and asked: "Who dares to try this Linux Desktop?"

Guess who stepped forward instantly and said, "I can do that"? My ~60 year old colleague, who was a medical technical assistant before I was even born and a life-long Windows user. We did the initial configuration, synced her mails and calendars, set up printers and network drives and went through the most important peculiarities of the GNOME3 desktop. It took about 90 minutes and then she said "I guess I'm fine from here. I'll play around with this a bit to get used to the new apps". I promised her first-level support but she was working without any issues the whole day.

I'm really proud of her

So many people keep telling me it would be too hard, too much reorientation to switch operating systems, but moments like that show me that the problem may lie somewhere else. People are afraid of change. People want to spare themselves the effort. But I think that daring to make a change, instead of doing nothing despite knowing better, will be rewarded. The next desktop PC is already prepared, so next week I will ask the question again :)

by Robin Schubert at April 28, 2023 12:00 AM

Google Open Source Peer Bonus Award 2023

I am honored to be a recipient of the Google Open Source Peer Bonus 2023. Thank you Rick Viscomi for nominating me for my work with the Web Almanac 2022 project. I was the author of the Security and Accessibility chapters of the Web Almanac 2022.

Google Open Source Peer Bonus 2023 Letter. Dated April 19, 2023. Dear Saptak Sengupta, On behalf of Google Open Source, I would like to thank you for your contribution to 2022 Web Almanac. We are honored to present you with a Google Open Source Peer Bonus. Inside the company, Googlers can give a similar bonus to each other for going above and beyond, so this is just a small way of saying thank you for your hard work and contributions to open source. We hope you enjoy this gift from all of us at Google and Rick Viscomi who nominated you. Thank you again for supporting open source! We look forward to your continued contributions. Best regards, Chris DiBona, Director of Google Open Source

Over the last year, I have started to spend more time contributing to, maintaining and creating Open Source projects, and reduced the amount of contract work I usually do. So this letter of appreciation feels great and gives me an additional boost to keep doing Open Source projects.

Some of the other Open Source projects that I have been contributing to and trying to spend more time on are:

In case someone is interested in supporting me to continue doing open source projects focused towards security, privacy and accessibility, I also created a GitHub Sponsors account.

April 20, 2023 07:32 AM

Converting HTML Tables to CSV

Today, I decided to analyze my bank account statement by downloading it from the day I opened my bank account. To my surprise, it was presented as a web page. Initially, my inner developer urged me to write code to scrape that data. However, feeling a bit lazy, I postponed doing so.

Later in the evening, I searched the web for an alternative way to extract the data and discovered that HTML tables can be converted to CSV files. All I had to do was save the copied markup as a CSV file: I opened the Chrome browser's inspect feature, copied the table, saved it with the CSV extension, and then opened the file with LibreOffice. Voila! I had the spreadsheet with all my transactions.
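If you would rather script the conversion, a few lines of Python with pandas do the same job. This is just a rough sketch: the file names are placeholders, and pandas needs lxml or html5lib installed to parse the HTML.

# Sketch: convert the first HTML table in a saved statement page to CSV.
# "statement.html" and "statement.csv" are placeholder file names.
import pandas as pd

tables = pd.read_html("statement.html")     # one DataFrame per <table> in the page
tables[0].to_csv("statement.csv", index=False)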

Cheers!

#TIL #CSV #HTML Table

April 15, 2023 05:41 PM

Mastering Async Communication in a Remote World

This is one of my favorite posts/documents I have written. I wrote it during the pandemic (2020–21), when InfraCloud, the organization I work with, decided to go fully remote. It was published at infracloud.io on 11th April 2023: Mastering Async Communication in a Remote World. As a remote first organization, we encourage everyone to follow asynchronous communication while working with our peers and customers at InfraCloud. This article about writing better messages is directly from our internal handbook.
by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at April 10, 2023 06:30 PM

Thank you, my VMware team!

December 12, 2022


Dear Team,

As my last day at VMware approaches, I wanted to take a moment to thank each and every one of you for the support and guidance you have given me during my time at VMware.

To dims and Navid, I am especially grateful for helping me join the great organisation and for your ongoing support. Thank you for making me feel welcomed and valued from day one.

Nikhita, your support and sponsorship have been invaluable in helping me grow in my career. You are not only a great colleague, but also a wonderful friend inside and outside of VMware. I truly mean it when I say that YOU ARE MY ROLE MODEL.

Meghana, thank you for being an amazing onboarding buddy and for being there for me through every challenge and success. Your friendship, kindness and selflessness mean the world to me.

Arka and Yash, thank you for being amazing work partners and for the countless long troubleshooting and learning sessions we had together. I will miss working with you.

Nabarun, thank you for being an exceptional mentor and guiding me not only on technical matters, but also providing valuable advice and teaching me important soft skills.

Madhav, thank you for being such a kind-hearted person and always supporting me and cheering me on.

Anusha, Christian, Prasad, Akhil, Arnaud, Rajas, and Amit, thank you for sharing your wealth of professional experience with me and especially, for teaching me what it means to work hard. It has been an absolute honor to work with each of you, even if for a short time.

Finally, Kriti, Kiran, and Gaurav, thank you for supporting me throughout my journey at VMware.

Andrew, Dominik, Peri, Sayali, I never could have imagined finding such wonderful friends at VMware. I will deeply miss you all. Your friendship means so much to me.

Thank you all for being such a great team. I will always treasure the memories and the lessons I have learned here.

Best regards,

Priyanka Saggu


PS:

It’s amazing that the “DREAM TEAM” tweet that you posted about years ago, Nikhita, actually came together for me and I got to work with you. It’s still hard to believe it actually happened. Honestly, I’m feeling very emotional after typing this. Thank you for all your support always! ❤️

[Screenshot of the tweet, 12 December 2022]

December 12, 2022 12:00 AM

My first custom Fail2Ban filter

On my servers that are meant to be world-accessible, the first things I set up are the firewall and Fail2Ban, a service that updates my firewall rules automatically to reject requests from IP addresses that have failed repeatedly before. The ban duration and the number of failed attempts that trigger a ban can easily be customized; that way, bot and hacker attacks that try to break into my system via brute force and trial and error can be blocked, or at least delayed, very effectively.

Luckily, many pre-defined modules and filters already exist that I can use to secure the services I offer. To set up a jail for sshd, for instance, and do some minor configuration, I only need a few lines in my /etc/fail2ban/jail.local file:

[DEFAULT]
bantime  = 4w
findtime = 1h
maxretry = 2
ignoreip  = 127.0.0.1/8 192.168.0.1/24


[sshd]
enabled   = true
maxretry  = 1
findtime  = 1d

Just be aware that you should not change /etc/fail2ban/jail.conf, as this will be overwritten by fail2ban. If a jail.local is not already present, create one.

As you can see, I set some default options about how long IPs should be banned and after how many failed tries. I also exclude local IP ranges from bans, so I'll not lock myself out every time I test a new service or setting. However, for sshd I even tighten the rules a bit, since I only use public key authentication where I don't expect a single failure from a client that is allowed to connect. All the others can happily be sent to jail.

It's always a joy but also kind of terrifying to check the jail for the currently banned IPs; the internet is not what I would call a safe place.


sudo fail2ban-client status sshd
Status for the jail:
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     211
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 2016
   |- Total banned:     2202
   `- Banned IP list: ...

My own filter

To identify IP addresses that should be banned, Fail2Ban scans the appropriate log files for failed attempts with a regular expression, as the sshd module does with my /var/log/auth.log.

As mentioned above, there are already quite a few pre-defined modules. For my nginx reverse proxy the modules nginx-botsearch, nginx-http-auth and nginx-limit-req are available; the log file they scan by default is /var/log/nginx/error.log.

However, having a look in my /var/log/nginx/access.log I regularly find lots of failed attempts that are probing my infrastructure. They look like this:

118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.7/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.4/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysqladmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/myadmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.1.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.9.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.8.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.0/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96"
198.98.59.132 - - [30/Sep/2022:01:51:59 +0200] "POST /boaform/admin/formLogin HTTP/1.1" 404 134 "http://xxx.xxx.xxx.xxx:80/admin/login.asp" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /.env HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /config.json HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /.git/config HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"

I don't use phpMyAdmin and I don't host a WordPress site (requests to wp-login and wp-admin are pretty common), so I would prefer to ban IPs that scan my infrastructure for these services. So I wrote a new filter to scan my nginx access.log file for requests of that kind.

In /etc/fail2ban/filter.d/nginx-access.conf I added the following definition:

[Definition]

_daemon = nginx-access
failregex = (?i)^<HOST> .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)

  • (?i) makes the whole regular expression case insensitive, so it will capture phpmyadmin and PhpMyAdmin equally.
  • ^<HOST> will look from the start of each line to the first space for the IP address. <HOST> is a defined capture group from Fail2Ban, that must be present in failregexes to let Fail2Ban know who to ban.
  • .* matches any character, and an arbitrary number of them
  • (wp-login|wp-admin...) these are the request snippets to look for; in parentheses and separated with the pipe operator, it will look for matches of either of the given strings.
  • (404|403) are the HTTP response codes for "file/page not found" and "forbidden". So if these pages are not available or not meant to be accessed, this rule will be triggered.
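Before wiring the filter into Fail2Ban, it can help to sanity-check the expression against a few log lines in plain Python. In this sketch the <HOST> tag is replaced by a simple named group; Fail2Ban's own substitution is more elaborate, and fail2ban-regex is the proper testing tool, so this is only a quick approximation:

import re

# Stand-in for Fail2Ban's <HOST> tag; good enough for a quick test.
failregex = r"(?i)^(?P<host>\S+) .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)"

lines = [
    '118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET /phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"',
    '185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0"',
    '203.0.113.7 - - [30/Sep/2022:12:00:00 +0200] "GET /index.html HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

for line in lines:
    match = re.search(failregex, line)
    if match:
        print("would ban:", match.group("host"))
    else:
        print("ignored:  ", line.split()[0])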

In my jail.local I add the following section to use the new filter:

[nginx-access]
enabled   = true
port      = http,https
filter    = nginx-access
logpath   = /var/log/nginx/access.log

Restart the fail2ban service (e.g. systemctl restart fail2ban) to enable the new rule.

I started with only a few keywords to filter on, but the regular expression can easily be extended with further terms.

by Robin Schubert at October 01, 2022 12:00 AM

Hope - Journal

This accompanies Hope.

This is a daily journal of my efforts, to be read from bottom to top.

To explain the jargon:

  • cw: current weight
  • gw: goal weight

Ok, lets start.

September 12, 2022

- cw: 80.3 kgs

Starting things off again. After coming back to Bangalore, I had a tough time setting things up, so I was mostly eating food from outside and gained about 10 kg in the past month, haha!

This time it's a bit different from the last, as I will be hitting the gym and walking as well. Let's see how it goes!

To begin, I started with a 15-minute walk on the treadmill at 6 km/h and incline 10. In the evening, I went for a 1.53 km walk at a pace of 10:05 min/km.

I chugged a total of 1.25L of water yesterday. Need to bring this up to 4L a day.

September 12, 2022 12:00 AM

How I started programming

My three kids are now 5, 7 and 9 years old. Meanwhile, they all have their own rooms which, for the two older school kids, contain desks with a Raspberry Pi 400 on them. They use it to look up pictures of Pokemon, to listen to music and to play minetest, supertuxkart or the secret of monkey island :) Well, they also used it for joining classes remotely during the lockdowns.

My primary intention was to make the computer accessible so that, whenever their interest arises, they could play around and discover things on their own. Today I think that the number of possibilities may be way too high to just sit down and start with something specific.

I am actually considering running the Pis in some kind of kiosk mode, to reduce distraction. I remember that, on the first computer I used, we ran one program at a time. If you decided to run another program, you would turn off the computer, change the floppy and restart. Of course it's nice to have multiple things running at once on a computer, but to learn something new, I would argue that running one thing and only this one thing might be best.

Our first family computer

Thinking back when I was their age, it must have been the time when my father received an old Amstrad PCW (Joyce) from a friend, our first computer.

I was fascinated by that machine, I loved the green on black text and the different noises it made - especially the dot matrix printer noises :D My father used it for word processing and because that was all he needed, it was all he ever tried. I also loved editing text in locoscript (which was just awesome) and playing the few games that were available.

However, the Joyce came with a BASIC and with the Logo programming language. I had no idea what either of them was, nor had anyone in our family. So one day I grabbed the manuals (which luckily were in German) and started learning Logo and running the examples until I was able to draw my own little pictures. In a playful manner I learned the concepts of algorithms; of variables, loops and subroutines.

At that time, BASIC was still incomprehensible to me. This changed when my parents, who wanted to foster my interest but didn't quite know how, gifted me a VTech SL, an educational computer that could not really do much, but came with a BASIC and a manual that was actually appropriate for children and that I could follow along nicely. So I soon had plenty of those little programs that would ask you for your name or age and then make funny comments about that. My main motivation to write code was always to eventually develop a cool game. Good for me that some of my friends shared that interest, and one in particular I considered a real programming wizard.

Interest amplification through friends

When I was young, LAN parties were the real thing. I saved money for a then-cheap Medion PC - an Intel Pentium D with an NVidia RIVA TNT graphics card. The only condition my parents put upon me was that I had to pass an official typing course - "The computer is not just a toy, learn touch-typing so you can use it for work/school".

So you would carry your midi tower, 17 inch CRT monitor and a box of cables over to a friend's basement and forget about daytime and the rest of the world for one weekend over Duke Nukem 3D, Starcraft and Jedi Knight - Dark Forces 2. The friend at whose place we met was two years older and first impressed me when we were missing the last BNC terminator to finalize our LAN connection (yes, that was before the time of Ethernet, when all PCs had to be hooked up in line, connected by a coaxial cable and cleanly terminated on both ends). So he grabbed an ohmmeter, measured the resistance the terminator had to have, found a fitting resistor in a drawer and bent it into shape to close our network connection.

He was regularly programming in Pascal and I was blown away when he showed us his self-written window manager/desktop environment. It could not do too much, but show files as icons which you could nicely customize in color, but to me it was magic. Together we installed Borland Pascal on my machine and he showed me how to use the built-in documentation system. However, my English skills at that time were simply not good enough to really make sense of that excellent documentation. So I couldn't wait for the computer science course in school to start.

Two extremes of school computer science

Computer science. Awesome! I was so excited about it, that it hurt even more when we realized that it would be a complete disappointment. The first "computer science" course I had in school was nothing but a Microsoft Word/Excel/Powerpoint introduction, and not even a good one. Well, we endured and in the next year the teacher changed and so did the course. And that may have been the best class I've had, ever.

The new computer science teacher was also a physics teacher and was not too popular with the kids. He had a quite nerdy 70s look which I appreciate today, but which was inscrutable to us when we were young, and a funny name that translates to "beef". However, the topics he covered and the hands-on way he taught them were just great. Within two years we started with the basics of the Pascal programming language and the workings of computer algorithms in a Logo-like environment. After that we switched over to abstract data types (queues, lists, linked lists, trees etc.), computer architecture down to the level of "what does an ALU do, and how?", and finally we wrote our own assembly code to draw icons and images on the screen. That must have been in old unprotected mode, where you could just write into the video adapter memory directly, which was mapped into the PC's memory.

Soon enough we found us bumming instruction code lines from our assembly programs to find the most elegant and shortest solution to a problem, looking over each other's shoulders and admiring clever tricks. When I read Steven Levy's Hackers many years later, I perfectly remembered that feeling when reading about the first MIT hackers, hacking on the PDP-1.

We finished the course with a group project: We developed an idea for a 2D racing game we called "Geisterfahrer" (wrong-way driver) where the player had to dodge oncoming traffic. We identified the different tasks we had to do, planned what routines needed to be programmed and assigned teams. It didn't work out well, but hey, the concept was superb.

College, work and DGPLUG

I hate to admit it, but back then in my school days, I didn't like the computer science course very much. I simply could not appreciate the value of these lessons; I was bored by abstract data types, didn't know what I would ever need computer architecture knowledge for, and was a bad team player in our final programming task. Only when I was in college, studying physics and computer science, did I realize just how good this school course had been. In two years at college we covered exactly the same topics, going just as deep, but this time I was in a course with ~200 people instead of just 20.

I learned Java and C/C++ basics at college, and when I applied for a project to write my bachelor's thesis, I was looking for programming tasks in physics working groups; there were and still are plenty of them. I did the same when I started my master's thesis, this time programming in Java and C# (just because the syntax was similar but the performance was way better), and after that once again the same to find a PhD position - this time in a medical field. I started to learn Python with Mark Pilgrim's Dive Into Python, which was an excellent choice for me, because it gave plenty of examples and comparisons with other programming languages I already knew.

There's not much of interest to say from that era except one thing: in terms of programming, I was still a bad team player. The code I wrote was hard to maintain; I wrote it alone, and I wrote it just to work for me. I imagine the poor people coming to the working groups to continue my work had a hard time. I simply never learned how to develop software collaboratively - this part was actually not covered in college.

This only changed when I learned about DGPLUG and the summertraining, where - as I read - people were taught what one needs to know to start contributing to Open Source projects. I've written about that project before, and every summer I realize how much it has changed the way I work today for the better. And it is only now that I feel like I almost know what I am doing, and why, when I write code.

by Robin Schubert at September 02, 2022 12:00 AM

The Debug Diary - Chapter I

Lately, I was debugging an issue with the importer tasks of our codebase and came across a code block which looks fine but makes an extra database query in the loop. When you have a look at the Django ORM query

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more_filters>
).only("manufacturer_code", "uid", "year", "model", "trim")

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

you will notice we are using only, which loads only the set of fields mentioned and defers the other fields; but in the loop we are using the field trim_processed, which is a deferred field and will result in an extra database call.

Now that we have identified the performance issue, the best way to handle cases like this is to use values or values_list. The use of only should be discouraged in cases like these.

The updated code will look like this:

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more-filters>).values_list(
    "manufacturer_code",
    "uid",
    "year",
    "model",
    "trim_processed",
    named=True,
)

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

By doing this, we are safe from accidentally accessing fields which are not mentioned in the values_list; if anyone tries to do so, an exception will be raised.

** By using named=True we get the result as a named tuple which makes it easy to access the values :)
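A plain-Python way to see why this fails loudly: the rows returned with named=True behave like namedtuples, so touching a field that was not selected raises an error instead of silently hitting the database again. The field names below simply mirror the example above:

from collections import namedtuple

# Mimics a row from values_list(..., named=True) with only the selected fields.
Row = namedtuple("Row", ["manufacturer_code", "uid", "year", "model", "trim_processed"])
entry = Row("MC1", 42, 2022, "model-x", "base")

print(entry.trim_processed)   # fine: the field was selected
print(entry.trim)             # AttributeError: 'Row' object has no attribute 'trim'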

Cheers!

#Django #ORM #Debug

August 30, 2022 07:34 AM

LDAP authentication on Home Assistant

Last week I wrote a few sentences about a beautiful script I found to authenticate against an LDAP server, which could be used e.g. on Home Assistant, a platform to manage home automation and the like. We deployed a Home Assistant instance at work to monitor temperatures in various rooms and fridges, and to raise notifications and alarms should temperatures exceed certain thresholds. All team members should be able to log into the system using their central login credentials from the LDAP server.

Unforeseen difficulties

The shell script uses either of the command line utilities ldapsearch (from the openldap-clients package) or curl to make a request to the LDAP server, which requires a valid username and password. Either tool will return an error code > 0 if something goes wrong; as usual, exit code 0 will let us know that the command worked and thus that the username/password combination was correct. Further, the LDAP server can be queried for some extra attributes like displayName or others, which can be mapped into the requesting system.

However, there was one issue I hadn't anticipated; neither ldapsearch nor curl compiled with LDAP support was available on the Home Assistant.

There are plenty of ways to deploy Home Assistant. We had a spare Raspberry Pi and decided to use the HassOS distribution that is recommended when installing on a Pi. HassOS (the Home Assistant Operating System) is a minimalistic operating system that deploys the individual modules of Home Assistant as containers. The containers that are deployed are usually built on Alpine images. However, there were two problems:

  1. Software that I would install in any container would not be persistent but vanish on every re-boot.
  2. I couldn't even locate, let alone access the correct container that does the authentication.

Trial and error

As a proof of concept, I installed an SSH integration that would at least let me communicate with parts of the Home Assistant system via ssh. The ssh container by default also mounts the config and other persistent directories of Home Assistant.

So I downloaded the ldap-auth.sh script into the persistent config folder, started by adding the ldapsearch tool with apk add openldap-clients, and configured ldap-auth.sh until I was able to authenticate. I updated the Home Assistant config with an auth_providers section like this:

homeassistant:
  auth_providers:
    - type: command_line
      command: /config/scripts/ldap-auth.sh
      meta: true
    - type: homeassistant

Beware! Do include type: homeassistant in your list of auth providers or you will lock yourself out of the system if the script does not work correctly (just like I did).

After reloading the config, login with the command_line type of course failed, but I didn't find any logs that would propagate the error message, so I manually added some echo lines to the script, only to find out that ldapsearch could not be found by the authenticating container.

So I tried my luck with curl; however I could not make any reasonable request without the built-in LDAP support.

Build my custom curl

So I figured I basically had three possibilities:

  1. Use a different distribution of Home Assistant that I might be able to control better,
  2. request the feature of having openldap-clients baked into the container images, or build (and maintain) the image myself, or
  3. build curl for my target container with all the needed functions linked statically into one binary.

I assumed that all containers in the Pi's Home Assistant ecosystem would be the same architecture, which is Alpine on aarch64 for the ssh container. So I installed all dependencies I needed on the ssh container, cloned the curl repo and started configuring, installing missing dependencies on the fly.

./configure --with-openssl --with-ldap --disable-shared

Choosing the SSL library is mandatory; --disable-shared should prevent the use of any shared library, i.e. of any dependency I had to install that would not be available on the target machine later.

The build went through and I had an LDAP-enabled curl that I could test my requests with, so again I tinkered with the ldap-auth.sh script until it succeeded.

However, when used from the web interface it still would not work, this time complaining about missing dependencies, which I thought I had all included.

Checking the compiled binary I found it was 769.4K, much bigger than my 199K system curl, so something must have been linked statically. Looking up the shared object dependencies revealed what was missing:

[core-ssh ~]$ ldd curl
        /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libssl.so.1.1 => /lib/libssl.so.1.1 (0x7f92f76000)
        libcrypto.so.1.1 => /lib/libcrypto.so.1.1 (0x7f92d26000)
        libldap.so.2 => /lib/libldap.so.2 (0x7f92cc1000)
        liblber.so.2 => /lib/liblber.so.2 (0x7f92ca3000)
        libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libsasl2.so.3 => /lib/libsasl2.so.3 (0x7f92c79000)

While these are still a lot fewer dependencies than my system-installed curl has:

=> ldd `which curl`
        linux-vdso.so.1 (0x00007ffc8fdb6000)
        libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fce55263000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fce55057000)
        libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fce5502c000)
        libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fce5500a000)
        libssh2.so.1 => /usr/lib/libssh2.so.1 (0x00007fce54fc9000)
        libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fce54fb6000)
        libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fce54f1f000)
        libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fce54c3f000)
        libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fce54bea000)
        libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007fce54b41000)
        libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007fce54b33000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007fce54b19000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fce55380000)
        libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fce5496b000)
        libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fce54892000)
        libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fce54862000)
        libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fce5485c000)
        libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fce5484d000)
        libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fce54846000)
        libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fce54831000)
        libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x00007fce5480e000)

there were still way too many shared libraries involved for my taste.

I even asked in #curl on the Libera network what I could have done wrong or misunderstood.


14:57:34    schubisu | hi everyone! I'm trying to build a statically linked curl
                     | and configured with `--with-openssl --with-ldap --disable-shared`.
                     | However, when I run the binary on another machine it says
                     | it cannot find the shared libraries libldap and liblber. Did I
                     | misunderstand static linking?
15:27:25      bagder | static linking is a beast

Well, it was nice to hear that it may not have been entirely my fault :) bagder pointed me to Static curl, a GitHub repository that builds static releases for multiple platforms (YAY), but sadly with LDAP support disabled (AWWW). Running the build script with LDAP enabled also didn't go through.

An ugly hack to the rescue

Having spent way too much time on this issue, I went ahead with something that may be an ugly hack, but is also a "works for me": I had already copied the statically linked curl into the persistent config folder, so I would just add the missing libraries there as well.

I figured that of the 7 shared dependencies, 4 were available in the standard Alpine image anyway, so I was missing only three files:

  • libldap.so.2
  • liblber.so.2
  • libsasl2.so.3

that I copied from my ssh container into the persistent storage. I adjusted the ldap-auth.sh script one last time to add one line:

export LD_LIBRARY_PATH="/config/scripts"

and that did the trick.

I also confirmed that on the fresh system after re-boot, everything is still in place and working beautifully :)

by Robin Schubert at August 26, 2022 12:00 AM

Introducing Blogging Friday

It's not that I don't have things to write about; in fact I learn interesting new things every week. I have, however, never integrated a dedicated time for writing new posts into my weekly routine. So, to not procrastinate any further, I'm starting Blogging Friday right now with some things I did this week.

Lower the threshold for new posts

I'm using lektor as a static site generator; it's lightweight and new posts are really quick to generate. All it takes is a new sub-folder in my blog directory, containing a contents.lr file with a tiny bit of meta information. Apparently even this little effort is enough to trigger my procrastination, so to get this hurdle out of the way, a little shell script is quickly written:

#!/usr/bin/env bash
#filename: new_post.sh

if [ -z "$1" ]; then
    echo "usage: $0 <title>"
    exit 1
fi

posttitle="$*"
basepath="/home/robin/gitrepos/myserver/blog/content/blog"
postdir=$(echo "$posttitle" | sed -e "s/ /_/g" | tr "[:upper:]" "[:lower:]")
fullpath="$basepath/$postdir"
postdate=$(date --iso)

if [ -e "$fullpath" ]; then
    echo "file or directory $postdir already exists"
    exit 2
fi

mkdir "$fullpath"
echo "
title: $posttitle
---
pub_date: $postdate
---
author: Robin Schubert
---
tags: miscellaneous, programming
---
status: draft
---
body:
" > "$fullpath/contents.lr"

echo "created empty post: $postdir"

LDAP authentication for random services

I've integrated a few web services in our intranet at work, like a self-hosted GitLab server, a Zammad ticketing system, Nextcloud and the like. One requirement to integrate well into our ecosystem is the ability to authenticate against our OpenLDAP server. The services I have configured so far all had their own means of authenticating against LDAP; some need external plugins, some are configured in web interfaces and others in configuration files. However, I honestly never understood what they did under the hood.

I had a little epiphany this week when I tried to integrate a Home Assistant instance. Home Assistant does not have a fancy front-end for this; instead it is realized with a simple shell script. There's an example on GitHub which can be used and is actually not that hard to comprehend.

In summary, what it does is make a request to the LDAP server, either via ldapsearch (part of the openldap-tools package) or curl (which needs to be compiled with LDAP support). An example request with ldapsearch could look like this:

ldapsearch -H ldap://ip.of.ldap.server \
    -b "CN=Users,DC=your,DC=domain,DC=com" \
    -D "CN=Robin Schubert,CN=Users,DC=your,DC=domain,DC=com" \
    -W

Executed from the command line, this will prompt for the user's password and make the request to the server. If everything works fine, the command will exit with exit code 0; if different from 0, the request failed for whatever reason. This result is passed on.

That's it. Nothing new. Why then didn't I think of such a simple solution? The request via ldapsearch can of course be further refined, adding filters and piping the output through sed to map e.g. display names or groups and roles.
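Reproducing the check from Python shows how little is actually going on under the hood. This is just a sketch with placeholder server and DN values; it relies solely on the ldapsearch exit code, and passing the password with -w (instead of a password file via -y) is fine for a demo but not ideal in production:

import subprocess

# Sketch: a login is considered valid if ldapsearch can bind with the
# given credentials, i.e. if it exits with return code 0.
def ldap_login_ok(username: str, password: str) -> bool:
    bind_dn = f"CN={username},CN=Users,DC=your,DC=domain,DC=com"
    result = subprocess.run(
        [
            "ldapsearch",
            "-H", "ldap://ip.of.ldap.server",
            "-b", "CN=Users,DC=your,DC=domain,DC=com",
            "-D", bind_dn,
            "-w", password,
        ],
        capture_output=True,
    )
    return result.returncode == 0

print(ldap_login_ok("Robin Schubert", "not-my-real-password"))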

Playing with PGP in Python using PGPy

I was exploring different means of dealing with electronic signatures in Python this week. The first library I found was python-gnupg; I should have been more suspicious when I saw that the last update was 4 years ago. They may be calling it "pretty bad protocol" for a reason. It is a wrapper around the gpg binary, using Python's subprocess to call it. This was not really what I wanted. For similar reasons, Kushal started johnnycanencrypt in 2020, a Python library that interfaces the Rust OpenPGP lib sequoia-pgp and which I have yet to explore further.

A third option I found is PGPy, a pure Python implementation of OpenPGP. Going through the examples in their documentation it feels straightforward; for the relatively simple use case I have (managing keys, signing and verifying signatures), it should be perfectly usable.
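For the sign-and-verify part, the examples from the PGPy documentation boil down to something like the sketch below; the name, email address and algorithm choices are just illustrative placeholders:

import pgpy
from pgpy.constants import (
    PubKeyAlgorithm, KeyFlags, HashAlgorithm,
    SymmetricKeyAlgorithm, CompressionAlgorithm,
)

# Generate a fresh RSA key and attach a user ID to it.
key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 2048)
uid = pgpy.PGPUID.new("Robin Schubert", email="robin@example.org")
key.add_uid(
    uid,
    usage={KeyFlags.Sign, KeyFlags.EncryptCommunications},
    hashes=[HashAlgorithm.SHA256],
    ciphers=[SymmetricKeyAlgorithm.AES256],
    compression=[CompressionAlgorithm.ZLIB],
)

# Sign a message with the private key and verify it with the public half.
message = pgpy.PGPMessage.new("Signed on Blogging Friday")
signature = key.sign(message)
print(bool(key.pubkey.verify(message, signature)))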

That's been my week

Nothing of what I tried this week was groundbreaking or new, but it either interested me or kept me busy in some way. I wonder what the statistics would look like if I counted how many times I look up the same issues and problems on the internet. Maybe writing some of them down will help me remember - or at least give me the possibility to look things up offline in my own records ;)

by Robin Schubert at August 19, 2022 12:00 AM
