Aggregated articles from feeds
Thank you Gnome Nautilus scripts
Kushal Das
As I upload photos to various services, I generally resize them as required, based on portrait or landscape mode. I used to do that for all the photos in a directory and then pick which ones to use. But I wanted to do it selectively: open the photos in the Gnome Nautilus (Files) application, right click, and resize only the ones I want.
This week I noticed that I can do that with scripts. Those can be in any given language; the selected files will be passed as command line arguments, or the full paths will be available in the environment variable `NAUTILUS_SCRIPT_SELECTED_FILE_PATHS`, joined by newline characters.
To add any script to the right click menu, you just need to place it in the `~/.local/share/nautilus/scripts/` directory. It will show up in the right click Scripts menu.
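For example, installing the script below could look like this (the file name is just what appears in the menu; `resize-photo.py` is an assumed name here):

```bash
# The file name is what appears in the Scripts menu; "Resize photo" is an example.
mkdir -p ~/.local/share/nautilus/scripts
cp resize-photo.py ~/.local/share/nautilus/scripts/"Resize photo"
chmod +x ~/.local/share/nautilus/scripts/"Resize photo"
```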
Below is the script I am using to reduce image sizes:
```python
#!/usr/bin/env python3
import os
import subprocess
import sys

from PIL import Image

# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")
paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith(".jpg") or fpath.endswith(".jpeg"):
        # Assume that is a photo
        try:
            img = Image.open(fpath)
            # basename = os.path.basename(fpath)
            basename = fpath
            name, extension = os.path.splitext(basename)
            new_name = f"{name}_ac{extension}"
            w, h = img.size
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "1024x686", new_name])
            else:  # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", basename, "-resize", "686x1024", new_name])
        except Exception:
            # Don't care, continue
            pass
```
You can see it in action (I selected the photos and right clicked, but the recording missed that part):
Breaking out of algorithm
Kushal Das
Many of you already know about my love of photography. I have been taking photos for many years, mostly photos of people: portraits at conferences like PyCon or Fedora events. I regularly post photos to Wikipedia too, especially for the people/accounts which do not have good quality profile photos. I stopped doing photography when we moved to Sweden; my digital camera was old, and becoming stable in a new country (details in a different future blog) takes time. But last year Anwesha bought me a new camera, actually two different cameras. And I started taking photos again.
I started taking regular photos of the weekly climate protests/demonstrations of the Fridays for Future Stockholm group, and then more different street protests and some dance/music events too. I don't have a Facebook account, and most people asked me to share over Instagram, so I did that. But as I covered more and more protests as a photographer, I noticed my Instagram posts were showing up less in people's feeds. Far less. I was wondering about different ways of breaking out of the algorithmic restriction.
Pixelfed is a decentralized, federated, ActivityPub based system to share photos. I am going to share photos more on this platform, and I hope people will slowly see more. I started my account yesterday.
You can follow me from any standard ActivityPub system, say your Mastodon account itself. Search for @kushal@pixel.kushaldas.photography or https://pixel.kushaldas.photography/kushal in your system, and you can then follow it like any other account. If you like the photos, then please share the account (or this blog post) with your followers and help me break out of the algorithmic restrictions.
On the technology side, the server runs Debian and containers. On my Fedora system, I am super happy to have added a few scripts for Gnome Files; they help me resize the selected images before upload (I will write a blog post tomorrow on this).
2024
Jason Braganza
January
- The Boy, the Mole, the Fox and the Horse, Charlie Mackesy*¶
- Number Go Up, Zeke Faux*†
- Dread Knight, Sarah Hawke*
- The Basic Laws of Human Stupidity, Carlo M. Cipolla†
February
- Romans in Space, Episode 412, The Rest is History, Tom Holland & Dominic Sandbrook*†#
- Just William, Richmal Crompton*
- Iron Widow, Xiran Jay Zhao*
- Empire Podcast, Series 5, Episodes 112–125, Empires of Iran, Part II*†#
- The Author Blog: Easy Blogging for Busy Authors, Anne R. Allen†
- Babel, R.F. Kuang*
- The Poppy War, R.F. Kuang*
- The Complete History & Strategy of Hermès, Season 14, Episode 2, Acquired Podcast*†#
- Nintendo’s Origins & Nintendo: The Console Wars, Season 14, Episodes 3 & 4, Acquired Podcast*†#
March
- The Poppy War, R.F. Kuang*
- The Dragon Republic, R.F. Kuang*
- The Burning God, R.F. Kuang*
April
- The Tainted Cup, Robert Jackson Bennett*
- Empire Podcast, Series 6, Episodes 128–132, Buddhism (The Indosphere Part I)*†#
- Foundryside, Robert Jackson Bennett*
- Shorefall, Robert Jackson Bennett*
- Locklands, Robert Jackson Bennett*
- Schaum’s Outline of French Grammar, Mary Coffman Crocker*‡
- 5BX Plan, The Royal Canadian Air Force*‡
- The Pixar Touch, David A. Price*†
- Empire Podcast, Series 7, Episodes 133–145, Queens and Empresses*†#
- The Romance of Lust: A Victorian Erotic Novel, Anonymous
- Sherlock & Co. Podcast, Episodes 1–15*#
May
- Sherlock & Co. Podcast, Episodes 16–35*#
- Empire Podcast, Episodes 148–155, America: The Empire of Liberty*†#
- The Dagger and the Coin - 01 - The Dragon’s Path, Daniel Abraham*
- The Well-Trained Mind: A Guide to Classical Education at Home, Susan Wise Bauer*†
- The Bed of Procrustes, Nicholas Nassim Taleb*¶†
June
- How Git Works, Julia Evans*‡
- The Programmer’s Brain, Felienne Hermans*†
- The Dagger and the Coin - 02 - The King’s Blood, Daniel Abraham*
- The Dagger and the Coin - 03 - The Tyrant’s Law, Daniel Abraham*
- The Dagger and the Coin - 04 - The Widow’s House, Daniel Abraham*
- The Dagger and the Coin - 05 - The Spider’s War, Daniel Abraham*
- The Tiger, John Vaillant*†
- Troll Mountain, Matthew Reilly*
- The Phoenix Project, Gene Kim, Kevin Behr & George Spafford*†
- Sherlock & Co. Podcast, Episodes 36–41*#
- Empire Podcast, America’s Founding Fathers & the American Revolution, Episodes 146–159 *†#
- A Good Girl’s Guide to Murder, Holly Jackson
- 90 Common French Phrases Every French Learner Should Know, Frederic Bibard‡
- How to Hide an Empire, Daniel Immerwahr*†
- How to Fail Miserably at Writing, Giselle Renard†
- The Colour of Magic, Terry Pratchett*
- Tarzan of the Apes, Edgar Rice Burroughs*
- The Return of Tarzan, Edgar Rice Burroughs*
- The Beasts of Tarzan, Edgar Rice Burroughs*
- The Girl Who Would Be Free, Ryan Holiday*
- The Rest is History, Titanic, Episodes 427–432*†
- Empire Podcast, American Imperialism, Episodes 160–169 *†#
July
- HIM, Geoff Ryman
- Mr. Einstein’s Secretary, Matthew Reilly
- The Peripheral, William Gibson
- The Rest is History, Fall of the American Indigenous Peoples, Episodes 446–456*†#
August
- Podman in Action, Daniel Walsh*‡
- Painless Tmux, Nate Dickson*‡
- Hardcore History, Mania for Subjugation, Episode 71*†
- The Rest is History, The Road to The Great War, Episodes 465–474*†
- The Steerswoman, Rosemary Kirstein*
- The Outskirter’s Secret, Rosemary Kirstein*
- The Lost Steersman, Rosemary Kirstein*
- The Language of Power, Rosemary Kirstein*
- Guns & Thighs, Ram Gopal Varma
- Business Maharajas, Gita Piramal*†
- Complete Digital Photography, Ben Long*‡
- The Rest is History, The French Revolution, Episodes 475–482*†
- Milestones: The Story of Wordpress, The Wordpress Team†
- Aurelia, Minerva Spencer*
- Empire Podcast, The Bengal Famine of 1942, Episodes 170–171 *†#
- Three Million, The Bengal Famine of 1942, Episodes 0-7 *†#
- A Christmas Gone Perfectly Wrong, Cecilia Grant
- A Lady Awakened, Cecilia Grant
- A Gentleman Undone, Cecilia Grant
- A Woman Entangled, Cecilia Grant
- The Cabinet of Dr. Leng, Douglas Preston & Lincoln Child
- Angel of Vengeance, Douglas Preston & Lincoln Child
September
- Empire Podcast, American Imperialism, Episodes 172–178*†#
- Empire Podcast, The Indosphere, Episodes 179–185*†#
- Powerful Command-Line Applications in Go, Ricardo Gerardi*‡
- The Rest is History, The Hundred Years’ War Part I, Episodes 485–490*†
- Empire Podcast, Scotland & Empire, Episodes 186–189*†#
October
- Head First Go, Jay McGavren*‡
- The Rest is History, The Roman Conquest of Britain, Episodes 499-502*†
- All Systems Red, Martha Wells*
- Artificial Condition, Martha Wells*
- Rogue Protocol, Martha Wells*
- Exit Strategy, Martha Wells*
- Network Effect, Martha Wells*
- Fugitive Telemetry, Martha Wells*
- System Collapse, Martha Wells*
- Grit, Angela Duckworth†
- The Art of Resilience, Ross Edgley*†
Fixing Espanso Expansions
Jason Braganza
I never had the time to deal with my Espanso1 hijinks until today.
While it worked perfectly when I installed it, all those years ago when I migrated over from the Mac, Espanso itself has changed and evolved over the years.
It took over my old configuration like a champ and mostly worked, with the exception of a few shortcuts; ones that I frequently used 😂
Emacs was one application of mine that never quite worked right with Espanso.
I’d frequently get `timed out waiting for reply from selection owner` whenever I tried expansions in there. Typing :joy to get 😂 would work in every other program, but no joy with Emacs, in addition to plenty of other expansions err … not expanding.
All my browser url expansions would not expand properly either, with mangled expansions most of the time.
So today I dove into the docs, and realised two things.
1. My emacs needed a longer time out
2. Espanso now tries to identify the kind of text, and maybe those were causing my issues?
Emacs
I realised I needed a longer clipboard threshold, only for Emacs.
So I created an app-specific configuration, just for Emacs, and gave it said option. Here’s what the contents of my `espanso/config/emacs.yml` look like:

```yaml
filter_class: 'Emacs'
clipboard_threshold: 10000
```
Rich Text Expansions
That helped with a lot of expansions in Emacs, but not with my joy expansion.
And not with stuff that was links and oh … links! and html! and markdown! Could those be the culprits?2
The docs mention that Espanso now has rich text support.
What that means is that the trigger now supports two new keywords, `html` and `markdown`, in addition to ye ole `replace`.
So I changed most of my affected shortcuts to either of those two keywords; `markdown` for most everything and `html` for linky stuff. Here’s what my beloved joy looks like now …
```yaml
matches:
  - trigger: ":joy"
    markdown: 😂
```
And those two things did it! Every shortcut expands everywhere! What joy 😂
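For the linky stuff, an `html` expansion follows the same shape; here’s a hypothetical example (the trigger and URL are made up, not from my actual config):

```yaml
matches:
  - trigger: ":blog"
    html: <a href="https://example.com">my blog</a>
```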
Exploring the World of Tiling Window Managers: A Journey to Boost Productivity
Farhaan Bukhsh
Forgejo
Jason Braganza
Set up a new instance of Forgejo for myself today.
The first thing I felt, was … instant relief.
I didn’t realise just how much cognitive discomfort I was feeling, because Github was the only place I had all my code. As well as mirrors/forks of all the stuff I loved.
I’m going to slowly move over to my little Forgejo instance as I learn more.
Right now I’ve set up pull mirrors for all the projects I love.
Next step is to figure out how to automate my Hugo blog deployment and, more importantly, have all my posts in source control.
Smol Note to Self, on Deploying Stuff
Jason Braganza
- Anything I want folks to see and/or interact with, goes on the public VM
- Anything I want to host for myself, goes on the Pi.
Updated blog theme after many years
Kushal Das
One of the major reasons for using static blogging, for me, is to worry less about how the site will look. Instead the focus was to just write (which of course I did not do well this year). I did not change my blog's theme for many, many years.
But I noticed that Oskar Wickström created a monospace based site and kindly released it under the MIT license. I liked the theme, so I decided to start using it. I still don't know HTML/CSS but managed to change the template for my website.
You can let me know over Mastodon what you think :)
anki-push-u, Creating a Tiny Pushover Addon for Anki
Jason Braganza
I want to slowly increase my French vocabulary, so I got this comprehensive frequency word deck from the Shared Decks section of the Anki website.
I keep forgetting to look it up during the day after my morning session, and the only way I can get those stubborn words1 to stick in my mind is if I keep doing the deck 4–5 times a day.
I know! With all my newfound devops/python skills, could I figure out a way to remind myself to do it? Turns out I can! :)
I already use Pushover, to get notified of darn near anything.2
And I read enough of Anki’s add-on documentation to know I could whip something up.
So a bit of searching on the net, a bit of jiggery pokery with Claude, and some spelunking through Anki’s source code and forums later, I present to you … anki-push-u!3
As long as your Anki’s running, this little add-on will find cards due, at the interval you tell it to, and then notify you wherever you have Pushover running!
No more forgetting due cards!
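If you’re curious how little code the core idea needs, here is an illustrative sketch (not the actual add-on code; the token values and interval are placeholders):

```python
# An illustrative sketch of the idea behind anki-push-u, not the add-on itself.
# From inside an Anki add-on, count due cards periodically and ping Pushover.
import requests
from aqt import mw

PUSHOVER_TOKEN = "your-app-token"   # placeholder
PUSHOVER_USER = "your-user-key"     # placeholder
INTERVAL_MS = 2 * 60 * 60 * 1000    # check every two hours (placeholder)

def notify_due_cards() -> None:
    due = len(mw.col.find_cards("is:due"))  # cards currently due for review
    if due:
        requests.post(
            "https://api.pushover.net/1/messages.json",
            data={
                "token": PUSHOVER_TOKEN,
                "user": PUSHOVER_USER,
                "message": f"{due} Anki cards are due!",
            },
            timeout=10,
        )

# Run the check on a repeating timer for as long as Anki stays open.
mw.progress.timer(INTERVAL_MS, notify_due_cards, True)
```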
Find the add-on and instructions over at Github.
20 years of this blog
Kushal Das
I started writing a blog 20 years ago, not on this domain, but this blog still has all the old posts, starting from 8th August 2004. Though I used to write mostly one line blog posts, which are the equivalent of Mastodon posts these days.
I have also started writing another blog, but in Swedish, so that I can feel less scared of the language.
Tools used
- Started on blogspot in 2004
- Moved to Wordpress in 2007
- Moved to Nikola in 2012, my first static blogging system.
- Moved to Shonku, my Golang based static blogging tool, in 2013.
- Moved to khata, my Rust based blogging tool, in 2019.
Hopefully I will write more in the coming months. But, who knows :)
Multi-factor authentication in django
Kushal Das
Multi-factor authentication is a must-have feature in any modern web application, especially providing support for both TOTP (think applications on a phone) and FIDO2 (say Yubikeys). I created a small Django demo, mfaforgood, which shows how to enable both.
I am using django-mfa3 for all the hard work, but specifically a PR branch from my friend Giuseppe De Marco.
I also fetched the cbor-js package into the repository so that hardware tokens for FIDO2 work. I hope this example will help you add MFA support to your Django application.
Major points of the code
- Adding example templates from the MFA project, with the `admin` theme, and adding `cbor-js` to the required templates.
- Adding `mfa` to `INSTALLED_APPS`.
- Adding `mfa.middleware.MfaSessionMiddleware` to `MIDDLEWARE`.
- Adding `MFA_DOMAIN` and `MFA_SITE_TITLE` to `settings.py`.
- Also adding `STATICFILES_DIRS`.
- Adding `mfa.views.MFAListView` as the index view of the application.
- Also adding `mfa` URLs.
A minimal sketch of this wiring is shown below.
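The setting and view names come from the list above; the URL prefix and static directory are assumptions, so check the mfaforgood repository for the exact code.

```python
# settings.py (sketch; "..." stands for your existing entries)
INSTALLED_APPS = [
    # ...
    "mfa",
]

MIDDLEWARE = [
    # ...
    "mfa.middleware.MfaSessionMiddleware",
]

MFA_DOMAIN = "localhost"        # the WebAuthn relying-party ID
MFA_SITE_TITLE = "mfaforgood"   # shown while registering a token

STATICFILES_DIRS = [BASE_DIR / "static"]  # where cbor-js is served from

# urls.py (sketch)
from django.urls import include, path
from mfa.views import MFAListView

urlpatterns = [
    path("mfa/", include("mfa.urls")),
    path("", MFAListView.as_view(), name="index"),
]
```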
After login for the first time one can enable MFA in the following screen.
Looking back to Euro Python 2024
Anwesha Das
Over the years, when I am low, I always go back to the 2014 Euro Python talk "Farewell and Welcome Home: Python in Two Genders" by Naomi. It has become the first step of my coping mechanism and the door to my safe house. Though 2024 marked my first Euro Python in person, I have had a long connection with, and respect for, the conference. A conference that believes community matters, human values and feelings matter, and is not afraid to walk the talk. And the conference stood up to my expectations in every bit.
My Talk: Intellectual Property Law 101
I gave my talk on Intellectual Property Law on the first day. After a long time, I was giving a talk on a legal topic. This talk was dedicated to developers, so I concentrated only on those issues which concern developers, and tried to stitch the related topics of Patents, Trademarks, and Copyright together into a smooth flow, so that it becomes easier for developers to understand and remember for all practical purposes. I was concerned whether I would be able to connect with people. Later, people came to me with several related questions, starting from:
- Why should I be concerned about patents?
- Which license would fit my project?
- Should I be scared about any Trademarks granted to other organizations under some other jurisdiction?
So on and so forth. Though I could not finish the whole talk due to time constraints, I am happy with the overall review.
Panel: Open Source Sustainability
On Day 1 of the main conference, we had the panel on Open Source Sustainability. This topic lies at the core of the open-source ecosystem: sustainability of projects and community, for the future and for stability. The panel had Deb Nicholson, Armin Ronacher, Çağıl Uluşahin Sönmez, Samuel Colvin, and me, with Artur Czepiel as the moderator. I was happy to represent my community's side. It was a good discussion, and hopefully we could give answers to some questions of the community in general.
Birds of a Feather session: Open Source Release Management
This Birds of a Feather (BoF) session was intended to deal with the release management of various Open Source projects, irrespective of their size. The discussion includes all projects, from a community-led project to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with several hundred contributors.
- What methods do we follow regarding versioning, release cadence, and the process?
- Do most of us follow manual processes or depend on automated ones?
- What works and what does not, and how can we improve our lives?
- What are the significant points that make the difference?
We discussed and covered the following topics: different aspects of release management of Open-Source projects, security, automation, CI usage, and documentation. We followed the Chatham House Rules during the discussion to provide the space for open, frank, and collaborative conversation.
PyLadies Lunch
And then comes my favorite part of the conference: the PyLadies Lunch. It was my seventh PyLadies Lunch, and I was moderating it for the fifth time. But this time, my wonderful friends Laís and Çağıl were by my side, holding me up when I failed. I love every time I am at a PyLadies Lunch. This is where I get my strength, energy, and love.
Workshop
I attended two workshops organized by Anezka Muller, Mia Bajić and all the amazing PyLadies organizers:
- A self-defense workshop, where the moderators helped us navigate challenging situations we face in life, safeguard ourselves from them, and overcome them.
- An I Am Remarkable workshop, where we learned to tell people about our successes.
Representing Ansible Community
I always take the chance to meet the Ansible community members face-to-face. Euro Python gave me another opportunity to do that. I learned about different user stories that we do not get to hear from our work corners, and I learned about these unique problems and their solutions in Ansible.
Fun fact: Maarten gave a review after learning that I am Anwesha from the Ansible project. He said, "Can you Ansible people slow down in releasing new versions of Ansible? Every time we get used to one, we have a new version."
Acknowledging mental health issues
The proudest moment for me personally was when I acknowledged my mental health issues and later when people came to me saying how they relate to me and how they felt empowered when I mentioned this.
PyLadies network at Red Hat
A network of PyLadies within Red Hat has been my dream since I joined Red Hat. Karolina also agreed when I shared this with her at last year's DevConf. And finally, we initiated it on day 2 of the conference. We are so excited for the future to come.
Meeting friends
Conference means friends. It was so great to meet so many friends after such a long time: Tylor, Nicholas, Naomi, Honza, Carol, Mike, Artur, Nikita, Valerio, and many new ones: Jannis, Joana, Christian, Martina, Tereza, Maria, Alyona, Mia, Naa, Bojan and Jodie. A special note of love to Jodie: thank you for holding my hand and taking me out of the dark.
The best is saved for the last. Euro Python 2024 made 3 of my dreams come true.
- Gender neutral washrooms
- Sanitary products in restrooms (I remember carrying sanitary napkins in my backpack at PyCon India and telling girls that if they needed one, it was available at the PyLadies booth).
- Neurodiversity bag (which saved me at the conference; thank you, Karolina, for this)
I cannot wait for the next Euro Python; see you all at Euro Python 2025.
PS: Thanks to Laís, I will always have a small piece of Euro Python 2024 with me. I know I am loved and cared for.
Euro Python 2024
Anwesha Das
It is July, and it is time for Euro Python, and 2024 is my first Euro Python. Some busy days are on the way. Like at every other conference, I have my diary, and the conference days are full of various activities.
Day 0 of the main conference
After a long time, I will give a legal talk. We are going to dig into some basics of Intellectual Property. What is it? Why do we need it? What are the different kinds of intellectual property? It is a legal talk designed for developers, so anyone and everyone from the community, with or without previous knowledge, can understand the content and use it to understand their fundamental rights and duties as developers. The talk, Intellectual Property 101, is scheduled at 11:35 hrs.
Day 1 of the main conference
Day 1 is PyLadies Day, a day dedicated to PyLadies. We have crafted the day with several different kinds of events. The day opens with a self-defense workshop at 10:30 hrs. PyLadies, throughout the world, aims to provide and foster a safe space for women and friends in the Python community. This workshop is an extension of that goal. We will learn how to deal with challenging, inappropriate behavior, whether in the community, at work, or in any social space. We will have a trained psychologist as a session guide to help us. This workshop is as important today as it was yesterday and may be in the future (at least until the enforcement of the CoC is clear). I am so looking forward to the workshop. Thank you, Mia, Laís and all the PyLadies for organizing this and giving shape to my long-cherished dream.
Then we have my favorite part of the conference, PyLadies Lunch. I crafted the afternoon with a little introduction session, shout-out session, food, fun, laughter, and friends.
After the PyLadies Lunch, I have my only non-PyLadies session, which is a panel discussion on Open Source Sustainability. We will discuss the different aspects of sustainability in the open source space and community.
Again, it is PyLadies time. Here, we have two sessions.
The first is IAmRemarkable (https://ep2024.europython.eu/pyladies-events#iamremarkable), to help empower you by celebrating your achievements and to fight impostor syndrome. The workshop will help you celebrate your accomplishments and improve your self-promotion skills.
The second session is a 1:1 mentoring event, Meet & Greet with PyLadies. Here, willing PyLadies will be able to mentor and be mentored. They can be coached in different subjects, starting with programming, learning, things related to jobs and/or careers, etc.
Birds of a Feather session on Release Management of Open Source projects
It is an open discussion on release management in the Open Source ecosystem.
The discussion includes everything from community-led projects to projects maintained/initiated by big enterprises, from a project maintained by one contributor to a project with a contributor base of several hundred. What are the different methods we follow regarding versioning, release cadence, and the process itself? Do most of us follow manual processes or depend on automated ones? What works and what does not, and how can we improve our lives? What are the significant points that make the difference? We will discuss and cover the following topics: release management of open source projects, security, automation, CI usage, and documentation. In the discussion, I will share my release automation journey with Ansible. We will follow the Chatham House Rule during the discussion to provide the space for open, frank, and collaborative conversation.
So, here comes the days of code, collaboration, and community. See you all there.
PS: I miss my little Py-Lady volunteering at the booth.
Event Driven Ansible, what, why and how?
Anwesha Das
Ansible Playbooks is a well-known term; now there is a new term being floated in the project, Ansible Rulebooks. Today we are going to discuss Ansible's journey from Playbook to Rulebook, or rather Playbook with Rulebook.
What is Event Driven Ansible?
What is Event Driven Ansible? In simple terms, some action is triggered by some event. The idea of EDA comes from event-driven architecture. Event Driven Ansible runs code automatically based on received event notifications.
Some important terms:
What is an event in Event Driven Ansible?
The event is the notification of a certain incident.
Where do we get the events from?
We get the events from event sources. Ansible EDA provides different plugins to support various event sources. There are several event source plugins, such as: url_check (checking the HTTP status code), webhook (providing and checking events from a webhook), journald (monitoring the journald logs), and the list goes on.
When to take actions?
A rulebook defines conditions and the actions to take when those conditions are fulfilled. Conditions use operators on string, boolean, and numerical data. Actions are what occur once the conditions are met: running a playbook, setting a fact, running a module, etc.
Small example Project
Here is a small example of Event Driven Ansible and how it is run. The idea is that on receiving a message (here the number 42), a playbook will run on the host. There are the following 3 files:
demo_rule.yml

```yaml
---
- name: Listen for events on a webhook
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 8000
  rules:
    - name: Say thank you
      condition: event.payload.message == "42"
      action:
        run_playbook:
          name: demo.yml
```

This is the rulebook. We are using the `webhook` plugin here as the event source. As a rule, on receiving the message `42` as a JSON payload on the webhook, we run the playbook called `demo.yml`.
demo.yml

```yaml
- hosts: localhost
  connection: local
  tasks:
    - debug:
        msg: "Thank you for the answer."
```

`demo.yml` is the playbook which runs on the occurrence of the event mentioned in the rulebook and prints a debug message.
inventory.yml

```yaml
---
local:
  hosts:
    localhost
```

`inventory.yml` mentions the hosts to run the action against.
Further, there are two test files, `42.json` and `43.json`, to exercise the code:
```json
{
    "message": "42"
}
```

```json
{
    "message": "43"
}
```
First we have to install all related dependencies before we can run the rulebook:

```bash
$ python -m venv .venv
$ source .venv/bin/activate
$ python -m pip install ansible ansible-rulebook ansible-runner psycopg
$ ansible-galaxy collection install ansible.eda
$ ansible-rulebook --rulebook demo_rule.yml -i inventory.yml --verbose
```
Go to another terminal, in the same directory, and run the following command to test the rulebook. After receiving the message, the playbook runs.

```bash
curl -X POST -H "Content-Type: application/json" -d @42.json 127.0.0.1:8000/endpoint
```
Output:

```
2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting sources
2024-06-07 16:48:53,868 - ansible_rulebook.app - INFO - Starting rules
...
TASK [debug] *******************************************************************
ok: [localhost] => {
    "msg": "Thank you for the answer."
}

PLAY RECAP *********************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

2024-06-07 16:50:08,224 - ansible_rulebook.action.runner - INFO - Ansible runner Queue task cancelled
2024-06-07 16:50:08,225 - ansible_rulebook.action.run_playbook - INFO - Ansible runner rc: 0, status: successful
```
Now if we send the other JSON file, `43.json`, we see that the playbook does not run even though the HTTP status code is `200`.

```bash
curl -X POST -H "Content-Type: application/json" -d @43.json 127.0.0.1:8000/endpoint
```
Output:

```
2024-06-07 18:20:37,633 - aiohttp.access - INFO - 127.0.0.1 [07/Jun/2024:17:20:37 +0100] "POST /endpoint HTTP/1.1" 200 159 "-" "curl/8.2.1"
```
You can try this yourself by following this git repository.
A Tragic Collision: Lessons from the Pune Porsche Accident
Shivam Soni
SSL: How It Works and Why It Matters
Farhaan Bukhsh
Test container image with eercheck
Anwesha Das
Execution Environments give us the benefits of containerization by solving issues such as software dependencies and portability. Ansible Execution Environments are Ansible control nodes packaged as container images. There are two kinds of Ansible execution environments:
- Base, which includes the following:
  - fedora base image
  - ansible core
  - ansible collections: ansible.posix, ansible.utils, ansible.windows
- Minimal, which includes the following:
  - fedora base image
  - ansible core
I have been the release manager for Ansible Execution Environments. After building the images, I perform certain test steps to check whether the versions of the different components of the newly built images are correct. So I wrote eercheck to ease these test steps.
What is eercheck?
eercheck is a command line tool to test the Ansible Community Execution Environment before release. It uses podman-py to connect and work with the podman container image, and Python unittest for testing the containers. The project is licensed under GPL-3.0-or-later.
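To give a flavor of the approach, here is a simplified sketch of such a check (not eercheck's actual code; the socket path, image id, and expected version string are placeholder assumptions):

```python
# A simplified sketch of the eercheck idea, not the actual project code.
# It runs a command inside the execution environment image with podman-py
# and asserts on the output with unittest.
import unittest

from podman import PodmanClient

SOCKET = "unix:///run/user/1000/podman/podman.sock"  # assumed user socket path
IMAGE_ID = "replace-with-image-id"                   # placeholder
EXPECTED_CORE = "2.16"                               # placeholder version

class TestEEVersions(unittest.TestCase):
    def test_ansible_core_version(self):
        with PodmanClient(base_url=SOCKET) as client:
            # Run `ansible --version` inside the image and capture its output
            container = client.containers.create(IMAGE_ID, command=["ansible", "--version"])
            container.start()
            container.wait()
            output = b"".join(container.logs(stdout=True, stderr=True)).decode()
            container.remove()
        self.assertIn(EXPECTED_CORE, output)

if __name__ == "__main__":
    unittest.main()
```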
How to use eercheck?
Activate the virtual environment in the working directory:

```bash
python3 -m venv .venv
source .venv/bin/activate
python -m pip install -r requirements.txt
```

Activate the podman socket:

```bash
systemctl start podman.socket --user
```

Update `vars.json` with the correct version numbers. Pick the correct versions of the Ansible Collections from the `.deps` file of the corresponding Ansible community package release; for example, for 9.4.0 the Collection versions can be found here. You can find the appropriate version of the Ansible Community Package here. The check needs to be carried out each time before the release of the Ansible Community Execution Environment.
Execute the program by giving the correct container image id:

```bash
./containertest.py image_id
```
Happy automating.
Opening up Ansible release to the community
Anwesha Das
Transparency, collaboration, inclusivity, and openness lay the foundation of the Open Source community. As a project's maintainers, a few of our tasks are to keep the entry bar for contribution low, make collaboration easy, and keep the governance model fair. The Ansible Community Engineering Team always thrives on these purposes through our different endeavors.
Ansible has historically been released by Red Hat employees. We planned to open up the release to the community, and I was asked to work on that. My primary goal was to make releasing Ansible dull and doable by the community. This was my first time dealing with GitHub Actions. There is still a lot to learn. But we are there now.
The Release Management working group started releasing the Ansible Community package using a GitHub Actions workflow from Ansible version 9.3.0. The recent 9.4.0 release has also been made following the same workflow.
Thank you Felix Fontein, Maxwell G, Sviatoslav Sydorenko and Toshio for helping shape the workflow with your valuable feedback, doing the actual releases, and answering my innumerable queries.
Making my first OnionShare release
Saptak Sengupta
One of the biggest bottlenecks in maintaining the OnionShare desktop application has been packaging and releasing the tool. Since OnionShare is a cross-platform tool, we need to ensure that each release works on most desktop operating systems. To know more about the pain that goes into making an OnionShare release, read the blogs [1][2][3] that Micah Lee wrote on this topic.
However, one other big bottleneck in our release process, apart from all the technical difficulties, is that Micah has always been the one making the releases, and even though the other maintainers are aware of the process, we have never actually made a release. Hence, to mitigate that, we decided that I would be making the OnionShare 2.6.1 release.
PS: Since Micah has written pretty detailed blogs with code snippets, I am not going to include many code snippets (unless I made significant changes) so as not to lengthen this already long blog further. I am going to keep this blog more like a narrative of my experience.
Getting the hardware ready
Firstly, given the threat model of OnionShare, we decided that it is always good to have a clean machine to do the OnionShare release work, especially the signing part of things. Micah has already automated a lot of the release process using GitHub Actions over the years, but we still need to build the Apple Silicon version of OnionShare manually and then merge it with the Intel version to create a universal2 app bundle.
Also, in general, it's a good practice to have and use the signing keys on a clean machine for a project as sensitive as OnionShare, which is used by people with high threat models. So I decided to get a new MacBook for the same. This would help me build the Apple Silicon version as well as sign the packages for the other operating systems.
Also, I received the HARICA signing keys from Glenn Sorrentino that are needed for signing the Windows releases.
Fixing the bugs, merging the PRs
After the 2.6.1-dev release was created, we noticed some bugs that we wanted to fix before making 2.6.1. We fixed, reviewed and merged most of those bug fixes. Also, there were a few older PRs and documentation changes from contributors that I wanted merged before making the release.
Translations
Localization is an important part of OnionShare since it enables users to use OnionShare in the language they are most comfortable with. There were quite a few translation PRs. Also, emmapeel2, who always helps us with Weblate wizardry, made certain changes in the setup, which I also wanted to include in this release.
After creating the release PR, I also needed to check which languages are greater than 90% translated, make a push to hopefully get some more languages past that threshold, and finally make the OnionShare release with only the languages that cross it.
Making the Release PR
And, then I started making the release PR. I was almost sure that since Micah had just made a dev release, most things would go smoothly. But my big mistake was not learning from the pain in Micah's blog.
Updating dependencies in Snapcraft
Updating the poetry dependencies went pretty smoothly.
There was nothing much to update in the pluggable transport scripts as well.
But then I started updating and packaging for Snapcraft and Flatpak. Updating tor versions to the latest went pretty smoothly. In Snapcraft, the Python dependencies needed to be compared manually with `pyproject.toml`. I definitely feel like we should automate this process in future, but for now, it wasn't too bad.
But trying to build the snap with `snapcraft` locally just was not working for me on my system. I kept getting `lxd` errors that I was not fully sure what to do about. I decided to move ahead with the Flatpak packaging and wait to discuss the Snapcraft issue with Micah later. I was satisfied that at least it was building through GitHub Actions.
Updating dependencies in Flatpak
Even though I read about the hardship that Micah had to go through with updating pluggable transports and Python dependencies in Flatpak packaging, I didn't learn my lesson. I decided, let's give it a try. I tried updating the pluggable transports and faced the same issue that Micah did. I tried modifying the tool, even manually updating the commits, but something or the other failed.
Then I moved on to updating the Python dependencies for Flatpak. The generator code that Micah wrote for the desktop dependencies worked perfectly, but the CLI gave me pain. The format in which the dependencies were getting generated and the existing format did not match. And I didn't want to be too brave and change the format, since Flatpak isn't my area of expertise. But Python kind of is. So I decided to check if I could update the `flatpak-poetry-generator.py` file to work. And I managed to fix that!
That helped me update the dependencies in Flatpak.
MacOS and Windows Signing fun!
Creating Apple Silicon app bundle
As mentioned before, we still need to create an Apple Silicon bundle and then merge it with the Intel build generated from CI to get the universal2 app bundle. Before doing that, we need to install the poetry dependencies, tor dependencies and the pluggable transport dependencies.
And I hit an issue again: our get-tor.py script was not working. The script failed to verify the Tor Browser version that we were downloading. This has happened before, and I suspected that the Tor PGP signing key must have expired. I tried verifying manually, and it seems like that was the case: the subkey used for signing had expired. So I downloaded the new Tor Browser Developers signing key, created a PR, and it seems like I could download tor now.
Once that was done, I just needed to run:

```bash
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./setup-freeze.py bdist_mac
rm -rf build/OnionShare.app/Contents/Resources/lib
mv build/exe.macosx-10.9-universal2-3.11/lib build/OnionShare.app/Contents/Resources/
/Library/Frameworks/Python.framework/Versions/3.11/bin/poetry run python ./scripts/build-macos.py cleanup-build
```
And amazingly, it built successfully in the very first try! That was easy! Now I just need to merge the Intel app bundle and the Silicon app bundle and everything should work (Spoiler alert: It doesn't!).
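(For context, merging per-architecture Mach-O binaries is done with Apple's lipo tool; a rough sketch of the idea with hypothetical paths is below, while the real build script walks every binary in the .app tree.)

```bash
# Hypothetical paths; OnionShare's actual merge script handles the whole .app tree.
lipo -create \
    intel/OnionShare.app/Contents/MacOS/onionshare \
    silicon/OnionShare.app/Contents/MacOS/onionshare \
    -output universal2/OnionShare.app/Contents/MacOS/onionshare
lipo -info universal2/OnionShare.app/Contents/MacOS/onionshare  # should list x86_64 and arm64
```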
Once the app bundle was created, it was time to sign and notarize. However, the process was a little difficult for me to do since Micah had previously used an individual account. So I passed on the universal2 bundle to him and moved on to signing work in Windows.
Signing the Windows package
I had to boot into my Windows 11 VM to finish the signing and make the Windows release. Since this was the first time I was doing the release, I had to first get my VM ready by installing all the dependencies needed for signing and packaging. I am not super familiar with the Windows development environment, so I had to figure out adding PATH entries and other such things to make all the dependencies work. The next thing to do was setting up the HARICA smart card.
Setting up the HARICA smart card
Thankfully, Micah had already done this before so he was able to help me out a bit. I had to log into the control panel, download and import certificates to my smart card and change the token password and administrator password for my smart card. Apart from the UI of the SafeNet client not being the best, everything else went mostly smoothly.
Since Micah had already made some changes to fix the code signing and packaging stuff, it went pretty smoothly for me and I didn't face many obstructions. Science & Design, founded by Glenn Sorrentino (who designed the beautiful OnionShare UX!), has taken on the role of fiscal sponsor for OnionShare, and hence the package now gets signed under the name of Science and Design Inc.
Meanwhile, Micah had got back to me saying that the universal2 bundle didn't work.
So, the Apple Silicon bundle didn't work
One of the mistakes I made was that I didn't test my Apple Silicon build; I thought I would test it once it was signed and notarized. However, Micah confirmed that even after signing and notarizing, the universal2 build was not working. It kept giving a `segmentation fault`. Time to get back to debugging.
Downgrading cx-freeze to 6.15.9
The first thought that came to my mind was: Micah had made a dev build in October 2023, so the cx-freeze release from that time should still build correctly. So I decided to try a `build` (instead of `bdist_mac`) with the cx-freeze version from that time (which was `6.15.9`) and check if the binary created works. And thankfully, that did work. I tried with `6.15.10` and it didn't. So I decided to stick to `6.15.9`.
So let's now try running `bdist_mac`, create a `.app` bundle, and hopefully everything will work perfectly! But nope! The command failed with:

```
OnionShare.app/Contents/MacOS/frozen_application_license.txt: No such file or directory
```
So now I had a decision to make: should I try to monkey-patch this and just figure out how to fix it, or try to make the latest cx-freeze work? I decided to give the latest cx-freeze (version `6.15.15`) another try.
Trying zip_include_packages
So, one thing I noticed we were doing differently from what the cx-freeze documentation and examples for PySide6 mentioned was that we put our dependencies in `packages`, instead of `zip_include_packages`, in the setup options:
"build_exe": {
"packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"onionshare",
"onionshare_cli",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
So I thought, let's try moving all of the dependencies into `zip_include_packages` from `packages`. Basically, `zip_include_packages` includes the dependencies in the zip file, whereas `packages` places them in the file system and not the zip file. My guess was that the Apple Silicon requirements for how a `.app` bundle should be structured had changed. So the new options looked something like this:
"build_exe": {
"zip_include_packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"onionshare",
"onionshare_cli",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
So I created a build using that, ran the binary, and it gave an error. But I was happy, because it wasn't a `segmentation fault`. The error was mainly because it was not able to import some functions from `onionshare_cli`. So as a next step, I decided to move everything apart from `onionshare` and `onionshare_cli` to `zip_include_packages`. It looked something like this:
"build_exe": {
"packages": [
"onionshare",
"onionshare_cli",
],
"zip_include_packages": [
"cffi",
"engineio",
"engineio.async_drivers.gevent",
"engineio.async_drivers.gevent_uwsgi",
"gevent",
"jinja2.ext",
"PySide6",
"PySide6.QtCore",
"PySide6.QtGui",
"PySide6.QtWidgets",
],
"excludes": [
"test",
"tkinter",
...
],
...
}
This almost worked. Problem was, PySide 6.4 had changed how they deal with ENUMs and we were still using deprecated code. Now, fixing the deprecations would take a lot of time, so I decided to create an issue for the same and decided to deal with it after the release.
At this point, I was pretty frustrated, so I decided to do what I didn't want to do: just have both `packages` and `zip_include_packages`. So I did that, built the binary, and it worked. I decided to make the `.app` bundle. It worked perfectly as well! Great!
I was a little worried that adding the dependencies in both `packages` and `zip_include_packages` might increase the size of the bundle, but surprisingly, it actually decreased the size compared to the dev build. So that's nice! I also realized that I don't need to replace the `lib` directory inside the `.app` bundle anymore. I ran the cleanup code, hit some `FileNotFoundError`s, tried to find if the files were now in a different location, couldn't find them, and decided to wrap them in a `try-except` block.
After that, I merged the Silicon bundle with the Intel bundle to create the universal2 bundle again, sent it to Micah for signing, and it seems like everything worked!
Creating PGP signature for all the builds
Now that we had all the build files ready, I tried installing and running them all, and it seems like everything is working fine. Next, I needed to generate a PGP signature for each of the build files and then create a GitHub release. However, Micah is the one who has always created signatures. So the options for us now were:
- create an OnionShare GPG key that everyone uses
- sign with my GPG and update the documentation to reflect the same
The issue with creating a new OnionShare GPG key was distribution: the maintainers of OnionShare are spread across timezones and continents. So we decided to create the signatures with my GPG key and update the documentation on how to verify the downloads.
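Creating the detached signatures themselves is the easy, mechanical part; roughly (with hypothetical file names):

```bash
# File names are hypothetical; one detached, ASCII-armored signature per artifact.
gpg --detach-sign --armor OnionShare-2.6.1.dmg   # produces OnionShare-2.6.1.dmg.asc
gpg --verify OnionShare-2.6.1.dmg.asc OnionShare-2.6.1.dmg
```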
Concluding the release
Once the signatures were done, the next steps were mostly straightforward:
- Create a GitHub release
- Publish onionshare-cli on PyPi
- Push the build and signatures to the onionshare.org servers and update the website and docs
- Create PRs in Flathub and Homebrew cask
- Move the snapcraft release from the edge channel to stable
The above went pretty smoothly without much difficulty. Once everything was merged, it was time to make an announcement. Since Micah has been doing the announcements, we decided to stick with that for this release so that it reaches more people.
New Blog
Abhilash Raj
This is the beginning of my new blog! While https://blog.araj.me was previously running on Ghost as well, this is a new install, primarily because I couldn't easily get the data back from my previous Ghost install. It still lives in a MySQL instance, so old posts might appear on this instance too if I feel like it at some point.
What am I going to write about? I've been working a lot on my homelab setup, so that is probably going to be the starting point. I have also been trying out OpenWRT for my router (running on an EdgeRouter X; who could've thought it can run with 95% space available and over 65% free memory) and struggling to re-configure VLANs to segregate my homelab, "regular internet" for my wife and guests, and IoT stuff. Setting up VLANs on OpenWRT was not fun; I took down the internet a couple of times, which wasn't appreciated at home. So I ended up flashing another old TP-Link router I had to learn OpenWRT, so I can try out settings there before I apply them to the main router.
My homelab currently runs on an Intel NUC 10 i7 (6C12T, 16G RAM), which has been plenty for my current use cases. I've over-provisioned it with Proxmox VE as the hypervisor of choice. I am using an actual hypervisor based setup for the first time and there is no going back now! For some reason, I tried out XCP-ng as well, but with XOA I couldn't figure out how to do some stuff, so that setup is currently turned off. Maybe I'll dust it off again at some point. I do have 2 more nodes on standby to run more things, but that'll probably happen once I shift to my new house (hopefully soon!).
Year End Review - 2023
Farhaan Bukhsh
git fixup -- your workflow
Farhaan Bukhsh
Safeguarding Our Digital Lives: As Prevention is Better than the Cure
Shivam Soni
Upgrading Kubernetes Cluster
Priyanka Saggu
June 08, 2023
Disclaimer:
Just trying to document the process (strictly) for myself.
This documentation is just for educational purposes.
This process should not be followed for any production cluster!
Aim
To upgrade a Kubernetes cluster with nodes running Kubernetes version v1.26.4 to v1.27.2.
I'm using a Kubernetes cluster created using kind, for example's sake.
[STEP 1] Create a kind Kubernetes cluster
Use the following `kind-config.yaml` file:

```yaml
# two node (one worker) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e
- role: worker
  image: kindest/node:v1.26.4@sha256:f4c0d87be03d6bea69f5e5dc0adb678bb498a190ee5c38422bf751541cebe92e
```
Please note:
- The above config file will create a Kubernetes cluster with 2 nodes:
  - Control plane node (name: kind-control-plane), Kubernetes version: v1.26.4
  - Worker node (name: kind-worker), Kubernetes version: v1.26.4
Run the following command to create the cluster:

```
$ kind create cluster --config kind-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.26.4) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
```
Verify the cluster came up successfully:

```
$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   2m43s   v1.26.4   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
kind-worker          Ready    <none>          2m25s   v1.26.4   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.15.0-73-generic   containerd://1.6.21
```

Note that the version of both nodes is currently v1.26.4.
[STEP 2] Upgrade the control plane node
Exec into the docker container corresponding to the control plane node (kind-control-plane):

```
$ docker exec -it kind-control-plane bash
root@kind-control-plane:/#
```

Install the utility packages:

```
root@kind-control-plane:/# apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-control-plane:/# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-control-plane:/# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-control-plane:/# apt-get update
```
Check which version to upgrade to (in our case, we're checking if v1.27.2 is available):

```
root@kind-control-plane:/# apt-cache madison kubeadm
kubeadm | 1.27.2-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.1-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.27.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.5-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm | 1.26.4-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
...
```
Upgrade kubeadm to the required version:

```
root@kind-control-plane:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm
...
Setting up kubeadm (1.27.2-00) ...
Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
 ==> File on system created by you or by a script.
 ==> File also in package provided by package maintainer.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.
...

root@kind-control-plane:/# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:18:49Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
```
Check and verify the kubeadm upgrade plan:

```
root@kind-control-plane:/# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade/versions] Target version: v1.27.2
[upgrade/versions] Latest version in the v1.26 series: v1.26.5
W0608 12:57:04.800282    5535 compute.go:307] [upgrade/versions] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.26.5
            1 x v1.27.2   v1.26.5

Upgrade to the latest version in the v1.26 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.26.5
kube-controller-manager   v1.26.4   v1.26.5
kube-scheduler            v1.26.4   v1.26.5
kube-proxy                v1.26.4   v1.26.5
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.26.5

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.26.4   v1.27.2
            1 x v1.27.2   v1.27.2

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.26.4   v1.27.2
kube-controller-manager   v1.26.4   v1.27.2
kube-scheduler            v1.26.4   v1.27.2
kube-proxy                v1.26.4   v1.27.2
CoreDNS                   v1.9.3    v1.10.1
etcd                      3.5.6-0   3.5.7-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.27.2

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
```
We will upgrade to the latest stable version (v1.27.2):
```
root@kind-control-plane:/# kubeadm upgrade apply v1.27.2
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.27.2"
[upgrade/versions] Cluster version: v1.26.4
[upgrade/versions] kubeadm version: v1.27.2
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
W0608 12:59:23.499649    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:07.900906    5571 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.7" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.27.2" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
W0608 13:00:48.303106    5571 staticpods.go:305] [upgrade/etcd] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0608 13:00:48.305410    5571 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests56128700"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-06-08-13-00-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2613181160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
```
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.27.2". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
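If you want to double-check the new control plane component versions right away, you can do so from the terminal outside the docker exec (an optional sanity check; the exact queries below are my own addition, not part of the original walkthrough):
$ kubectl version    # the reported server version should now be v1.27.2
$ kubectl get pods -n kube-system -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image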
I’m skipping the CNI upgrade (I don’t have any additional CNI provider plugin beyond kindnet, the default for a kind cluster).
But if you want to check that kindnet is working, run the following inside the control plane node:
root@kind-control-plane:/# crictl ps
...
5715f2f6e401c b0b1fa0f58c6e 8 minutes ago Running kindnet-cni 2 3d78434184edf kindnet-blltq
...
root@kind-control-plane:/# crictl logs 5715f2f6e401c
I0608 13:02:38.079089 1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.080550 1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.080592 1 main.go:93] apiserver not reachable, attempt 0 ... retrying
I0608 13:02:38.080600 1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:38.081047 1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:38.081072 1 main.go:93] apiserver not reachable, attempt 1 ... retrying
I0608 13:02:39.081260 1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:39.082375 1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:39.082405 1 main.go:93] apiserver not reachable, attempt 2 ... retrying
I0608 13:02:41.082727 1 main.go:316] probe TCP address kind-control-plane:6443
W0608 13:02:41.083924 1 main.go:332] REFUSED kind-control-plane:6443
I0608 13:02:41.083963 1 main.go:93] apiserver not reachable, attempt 3 ... retrying
I0608 13:02:44.085510 1 main.go:316] probe TCP address kind-control-plane:6443
I0608 13:02:44.088241 1 main.go:102] connected to apiserver: https://kind-control-plane:6443
I0608 13:02:44.088270 1 main.go:107] hostIP = 172.18.0.3
podIP = 172.18.0.3
I0608 13:02:44.088459 1 main.go:116] setting mtu 1500 for CNI
I0608 13:02:44.088536 1 main.go:146] kindnetd IP family: "ipv4"
I0608 13:02:44.088559 1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
I0608 13:02:44.278193 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
I0608 13:02:44.278210 1 main.go:227] handling current node
I0608 13:02:44.280741 1 main.go:223] Handling node with IPs: map[172.18.0.2:{}]
I0608 13:02:44.280753 1 main.go:250] Node kind-worker has CIDR [10.244.1.0/24]
I0608 13:02:54.293198 1 main.go:223] Handling node with IPs: map[172.18.0.3:{}]
Now, before we upgrade the kubelet & kubectl (& restart the services), open a new terminal (outside the docker exec) and mark the node unschedulable (cordon), then evict the workload (drain). Note that kubectl drain cordons the node first, so the single command below does both:
# Outside the docker exec terminal
$ kubectl drain kind-control-plane --ignore-daemonsets
node/kind-control-plane cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-blltq, kube-system/kube-proxy-rfbd5
evicting pod local-path-storage/local-path-provisioner-6bd6454576-xlvmc
pod/local-path-provisioner-6bd6454576-xlvmc evicted
node/kind-control-plane drained
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready,SchedulingDisabled control-plane 47m v1.27.2 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
kind-worker Ready <none> 47m v1.26.4 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
Now, come back to the former terminal with the docker exec (into the control-plane node) and upgrade the kubelet and kubectl:
root@kind-control-plane:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl
kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.
And now restart the kubelet:
root@kind-control-plane:/# systemctl daemon-reload
root@kind-control-plane:/# systemctl restart kubelet
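To be safe, you can confirm that the kubelet came back up before moving on (an optional check, not part of the original steps):
root@kind-control-plane:/# systemctl status kubelet --no-pager
root@kind-control-plane:/# kubelet --version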
And now go back to the other terminal outside the docker exec, and uncordon the node:
$ kubectl uncordon kind-control-plane
node/kind-control-plane uncordoned
And that’s everything for the control plane upgrade! Finally, check that the node is running properly:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 52m v1.27.2 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
kind-worker Ready <none> 51m v1.26.4 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
And don’t forget to exit from the docker exec terminal (kind-control-plane):
root@kind-control-plane:/# exit
exit
[STEP 3] Upgrade the worker node
Exec inside the docker container corresponding to the worker node (kind-worker):
$ docker exec -it kind-worker bash
root@kind-worker:/#
Install the utility packages:
root@kind-worker:/# apt-get update && apt-get install -y apt-transport-https curl gnupg
root@kind-worker:/# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
root@kind-worker:/# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
root@kind-worker:/# apt-get update
Upgrade Kubeadm to the required version:
root@kind-worker:/# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.27.2-00 && apt-mark hold kubeadm
...
Setting up kubeadm (1.27.2-00) ...
Configuration file '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
==> File on system created by you or by a script.
==> File also in package provided by package maintainer.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** 10-kubeadm.conf (Y/I/N/O/D/Z) [default=N] ? Y
Installing new version of config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf ...
kubeadm set on hold.
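As an aside (my own addition, not in the original steps): if you script this and want to avoid the interactive conffile prompt above, apt can be told to take the package maintainer's version:
root@kind-worker:/# apt-get install -y -o Dpkg::Options::="--force-confnew" kubeadm=1.27.2-00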
Run kubeadm upgrade (for worker nodes, this upgrades the local kubelet configuration):
root@kind-worker:/# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2909228160/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Now, before we upgrade the kubelet & kubectl (& restart the services), open a new terminal (outside the docker exec of the kind-worker container) and mark the node unschedulable (cordon), then evict the workload (drain):
# Outside the docker exec terminal
$ kubectl drain kind-worker --ignore-daemonsets
node/kind-worker cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qpx8l, kube-system/kube-proxy-5xf5d
evicting pod local-path-storage/local-path-provisioner-6bd6454576-km824
evicting pod kube-system/coredns-5d78c9869d-mvgjq
evicting pod kube-system/coredns-5d78c9869d-zrmm4
pod/coredns-5d78c9869d-mvgjq evicted
pod/coredns-5d78c9869d-zrmm4 evicted
pod/local-path-provisioner-6bd6454576-km824 evicted
node/kind-worker drained
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 62m v1.27.2 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
kind-worker Ready,SchedulingDisabled <none> 61m v1.27.2 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
Now, come back to the former terminal with the docker exec (into the kind-worker container) and upgrade the kubelet and kubectl:
root@kind-worker:/# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.27.2-00 kubectl=1.27.2-00 && apt-mark hold kubelet kubectl
kubelet was already not hold.
kubectl was already not hold.
Hit:2 http://deb.debian.org/debian bullseye InRelease
Hit:3 http://deb.debian.org/debian-security bullseye-security InRelease
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease
Hit:1 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
kubectl is already the newest version (1.27.2-00).
kubectl set to manually installed.
kubelet is already the newest version (1.27.2-00).
kubelet set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
kubelet set on hold.
kubectl set on hold.
And now restart the kubelet:
root@kind-worker:/# systemctl daemon-reload
root@kind-worker:/# systemctl restart kubelet
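Before uncordoning, you can optionally confirm from the other terminal that the worker now reports the new kubelet version (the jsonpath query is my own addition):
$ kubectl get node kind-worker -o jsonpath='{.status.nodeInfo.kubeletVersion}'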
And now go back to the other terminal outside the docker exec, and uncordon the node:
$ kubectl uncordon kind-worker
node/kind-worker uncordoned
And that’s everything for the worker node upgrade! Finally, check that the node is running properly:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
kind-control-plane Ready control-plane 67m v1.27.2 172.18.0.3 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
kind-worker Ready <none> 66m v1.27.2 172.18.0.2 <none> Debian GNU/Linux 11 (bullseye) 5.15.0-73-generic containerd://1.6.21
And don’t forget to exit from the docker exec terminal (kind-worker):
root@kind-worker:/# exit
exit
With that, both our nodes are successfully upgraded from Kubernetes v1.26.4 to v1.27.2.
CSS: Combinators
Sandeep ChoudharyIn CSS, combinators are used to select content by combining selectors in specific relationships. There are different types of relationships that can be used to combine selectors.
Descendant combinator
The descendant combinator is represented by a space “ ” placed between two selectors. It selects elements matching the second selector if they have an ancestor (parent, parent’s parent, and so on) matching the first selector. Such selectors are called descendant selectors.
.cover p {
color: red;
}
<div class="cover"><p>Text in .cover</p></div>
<p>Text not in .cover</p>
In this example, the text “Text in .cover” will be displayed in red.
Child combinator
The child combinator is represented by “>” and is used between two selectors. An element matching the second selector is selected only if it is a direct child of an element matching the first selector, with no other element in between.
ul > li {
border-top: 5px solid red;
}
<ul>
<li>Unordered item</li>
<li>Unordered item
<ol>
<li>Item 1</li>
<li>Item 2</li>
</ol>
</li>
</ul>
In this example, only the <li> elements that are direct children of the <ul> (the “Unordered item” ones) get the red top border; the nested <ol> items do not.
Adjacent sibling combinator
The adjacent sibling combinator is represented by “+” placed between two selectors. An element matching the second selector is selected only if it immediately follows an element matching the first selector, i.e. it is the adjacent sibling.
h1 + span {
font-weight: bold;
background-color: #333;
color: #fff;
padding: .5em;
}
<div>
<h1>A heading</h1>
<span>Veggies es bonus vobis, proinde vos postulo essum magis kohlrabi welsh onion daikon amaranth tatsoi tomatillo
melon azuki bean garlic.</span>
<span>Gumbo beet greens corn soko endive gumbo gourd. Parsley shallot courgette tatsoi pea sprouts fava bean collard
greens dandelion okra wakame tomato. Dandelion cucumber earthnut pea peanut soko zucchini.</span>
</div>
In this example, only the first <span> (the one immediately following the <h1>) gets the given CSS properties.
General sibling combinator
The general sibling combinator is represented by “~“. We use it when we want to select all matching siblings that follow an element, not only the adjacent one.
h1 ~ h2 {
font-weight: bold;
background-color: #333;
color: #fff;
padding: .5em;
}
<article>
<h1>A heading</h1>
<h2>I am a paragraph.</h2>
<div>I am a div</div>
<h2>I am another paragraph.</h2>
</article>
In this example, every <h2> element following the <h1> will have the given CSS properties.
CSS combinators provide powerful ways to select and style content based on their relationships in the HTML structure. By understanding combinators, we can create clean, maintainable, and responsive web designs.
Cheers!
References – MDN Web Docs
Understanding massive ZeroDay impacting Dogecoin and 280+ networks including Litecoin and Zcash
Shivam SoniWhat is stopping us from using free software?
Robin SchubertI had a funny day yesterday
I'll start with the evening, which I spent as a tutor in the RoboLab; a workshop for kids aged 10-18 to build their own robot from some 3d printed parts, an ESP and all the electrical equipment you need to make some wheels move. It's a great project and I have much respect for the people who initiated it and still maintain it in their free time with the children.
The space we can use for the project is called Digitallabor (digital lab) and offers everything you would want from a well equipped maker space, including a shelf full of laptops to use while you're working there.
I should not be surprised anymore, but I can't help it
Of course, all laptops run Windows. I picked one, booted up and saw the fully bloated and ad-loaded standard installation of Windows 10. Last time a search for updates had been performed: early 2019. No customized privacy settings, nothing. Just the standard installation in all its ugliness.
I asked the people running the space why. Why? As this would be the perfect place to introduce the children to free software and even shed some light upon the difference between Free and Open Source Software and proprietary, user despising spyware (of course I did ask in a somewhat more diplomatic manner).
The answers: "The children are used to it.", "It's easier to maintain."
Yes. So last search for updates 2019. That's well maintained.
Regarding the "the children are used to it": I can confirm that children don't give a shit. If it runs Minetest then it's fine. If they have access to a PC or laptop at home at all, because in my experience most of the kids nowadays have exactly two digital media skills anyway: tapping and swiping. So this would be the perfect place to introduce them to free alternatives!
The morning was different
We're a small company with only ~12 employees, most of whom are rather non-technical. So there is no IT department. Or in other words: I am the IT department. And our IT department finds it no longer responsible to run Windows on business PCs (at least in the world outside the US). So yesterday I prepared a new PC with Fedora 38, brought it to my colleagues and asked: "Who dares to try this Linux Desktop?"
Guess who stepped forward instantly and said, "I can do that"? My ~60 year old colleague who was a medical technical assistant before I was even born, and a life-long Windows user. We did the initial configuration, synced her mails and calendars, set up printers and network drives and went through the most important peculiarities of the GNOME3 desktop. It took about 90 minutes and then she said "I guess I'm fine from here. I'll play around with this a bit to get used to the new apps". I promised her first-level support but she was working without any issues the whole day.
I'm really proud of her
So many people keep telling me it would be too hard, too much reorientation, to switch operating systems, but moments like that show me that the problem may lie somewhere else. People are afraid of change. People want to spare themselves the effort. But I think that daring to make a change, instead of doing nothing despite knowing better, will be rewarded. The next desktop PC is already prepared, so next week I will ask the question again :)
Google Open Source Peer Bonus Award 2023
Saptak SenguptaI am honored to be a recipient of the Google Open Source Peer Bonus 2023. Thank you Rick Viscomi for nominating me for my work with the Web Almanac 2022 project. I was the author of Security and Accessibility chapters of the Web Almanac 2022.
Over the last year, I have started to spend more time contributing to, maintaining and creating Open Source projects, and have reduced the amount of contract work I usually would do. So this letter of appreciation feels great and gives me an additional boost to continue doing Open Source work.
Some of the other Open Source projects that I have been contributing to and trying to spend more time on are:
In case someone is interested in supporting me in continuing to do open source work focused on security, privacy and accessibility, I have also created a GitHub Sponsors account.
Converting HTML Tables to CSV
Sandeep ChoudharyToday, I decided to analyze my bank account statement, downloading everything from the day I opened the account. To my surprise, it was presented as a web page. Initially, my inner developer urged me to write code to scrape that data. However, feeling a bit lazy, I postponed doing so.
Later in the evening, I searched the web for an alternate way to extract the data and discovered that HTML tables can be converted to CSV files. All I had to do was save the table in CSV format: I opened the Chrome browser's inspect feature, copied the table, saved it with the .csv extension, and then opened the file with LibreOffice. Voila! I had the spreadsheet with all my transactions.
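If you'd rather script the conversion, the same thing is a few lines of Python with pandas (a minimal sketch, assuming the page was saved as statement.html and that pandas and lxml are installed; the file names are just placeholders):
import pandas as pd

# read_html parses every <table> element on the page into a list of DataFrames
tables = pd.read_html("statement.html")

# write the first table out as CSV, dropping the DataFrame's index column
tables[0].to_csv("statement.csv", index=False)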
Cheers!
Mastering Async Communication in a Remote World
Bhavin GandhiThank you, my VMware team!
Priyanka SagguDecember 12, 2022
Dear Team,
As my last day at VMware approaches, I wanted to take a moment to thank each and every one of you for the support and guidance you have given me during my time at VMware.
To dims and Navid, I am especially grateful for helping me join the great organisation and for your ongoing support. Thank you for making me feel welcomed and valued from day one.
Nikhita, your support and sponsorship have been invaluable in helping me grow in my career. You are not only a great colleague, but also a wonderful friend inside and outside of VMware. I truly mean it when I say that YOU ARE MY ROLE MODEL.
Meghana, thank you for being an amazing onboarding buddy and for being there for me through every challenge and success. Your friendship, kindness and selflessness mean the world to me.
Arka and Yash, thank you for being amazing work partners and for the countless long troubleshooting and learning sessions we had together. I will miss working with you.
Nabarun, thank you for being an exceptional mentor and guiding me not only on technical matters, but also providing valuable advice and teaching me important soft skills.
Madhav, thank you for being such a kind-hearted person and always supporting me and cheering me on.
Anusha, Christian, Prasad, Akhil, Arnaud, Rajas, and Amit, thank you for sharing your wealth of professional experience with me and especially, for teaching me what it means to work hard. It has been an absolute honor to work with each of you, even if for a short time.
Finally, Kriti, Kiran, and Gaurav, thank you for supporting me throughout my journey at VMware.
Andrew, Dominik, Peri, Sayali, I never could have imagined finding such wonderful friends at VMware. I will deeply miss you all. Your friendship means so much to me.
Thank you all for being such a great team. I will always treasure the memories and the lessons I have learned here.
Best regards,
Priyanka Saggu
PS:
It’s amazing that the “DREAM TEAM” tweet that you posted about years ago, Nikhita, actually came together for me and I got to work with you. It’s still hard to believe it actually happened. Honestly, I’m feeling very emotional after typing this. Thank you for all your support always! ❤️
My first custom Fail2Ban filter
Robin SchubertOn my servers that are meant to be world-accessible, the first things I set up are the firewall and Fail2Ban, a service that updates my firewall rules automatically to reject requests from IP addresses that have failed repeatedly before. The ban duration and the number of failed attempts that trigger a ban can easily be customized; that way, bot and hacker attacks that try to break into my system via brute force and trial and error can be blocked, or at least delayed, very effectively.
Luckily, many pre-defined modules and filters already exist that I can use to secure my offered services. To set up a jail for sshd, for instance, and do some minor configuration, I only need a few lines in my /etc/fail2ban/jail.local file:
[DEFAULT]
bantime = 4w
findtime = 1h
maxretry = 2
ignoreip = 127.0.0.1/8 192.168.0.1/24
[sshd]
enabled = true
maxretry = 1
findtime = 1d
Just be aware that you should not change /etc/fail2ban/jail.conf, as this will be overwritten by Fail2Ban. If a jail.local is not already present, create one.
As you can see, I set some default options for how long IPs should be banned and after how many failed tries. I also exclude local IP ranges from bans, so I won't lock myself out every time I test a new service or setting. However, for sshd I tighten the rules even a bit further, since I only use public key authentication, where I don't expect a single failure from a client that is allowed to connect. All the others can happily be sent to jail.
It's always a joy but also kind of terrifying to check the jail for the currently banned IPs; the internet is not what I would call a safe place.
sudo fail2ban-client status sshd
Status for the jail:
|- Filter
| |- Currently failed: 0
| |- Total failed: 211
| `- File list: /var/log/auth.log
`- Actions
|- Currently banned: 2016
|- Total banned: 2202
`- Banned IP list: ...
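Should I ever ban myself in spite of the ignoreip setting, a single address can be unbanned manually (the jail name and IP below are placeholders):
sudo fail2ban-client set sshd unbanip 192.0.2.1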
My own filter
To identify IP addresses that should be banned, Fail2Ban scans the appropriate log files for failed attempts with a regular expression, as the sshd module does with my /var/log/auth.log.
As mentioned above, there are already quite some pre-defined modules. For my nginx reverse proxy, the modules nginx-botsearch, nginx-http-auth and nginx-limit-req are available; the log file they scan by default is /var/log/nginx/error.log.
However, having a look at my /var/log/nginx/access.log, I regularly find lots of failed attempts that are probing my infrastructure. They look like this:
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.7/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.4/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysqladmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/myadmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.1.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.9.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.8.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.0/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96"
198.98.59.132 - - [30/Sep/2022:01:51:59 +0200] "POST /boaform/admin/formLogin HTTP/1.1" 404 134 "http://xxx.xxx.xxx.xxx:80/admin/login.asp" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /.env HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /config.json HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /.git/config HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
I don't use PhpMyAdmin and I don't host a WordPress site (requests to wp-login and wp-admin are pretty common), and I would prefer to ban IPs that scan my infrastructure for these services. So I wrote a new filter to scan my nginx/access.log file for requests of that kind.
In /etc/fail2ban/filter.d/nginx-access.conf I added the following definition:
[Definition]
_daemon = nginx-access
failregex = (?i)^<HOST> .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)
- (?i) makes the whole regular expression case insensitive, so it will capture phpmyadmin and PhpMyAdmin equally.
- ^<HOST> will look from the start of each line up to the first space for the IP address. <HOST> is a capture group defined by Fail2Ban that must be present in every failregex to let Fail2Ban know whom to ban.
- .* matches any character, an arbitrary number of times.
- (wp-login|wp-admin...) these are the request snippets to look for; in parentheses and separated by the pipe operator, the expression matches any one of the given strings.
- (404|403) are the HTTP responses for "file/page not found" and "forbidden". So if these pages are not available or not meant to be accessed, this rule will be triggered.
In my jail.local I add the following section to use the new filter:
[nginx-access]
enabled = true
port = http,https
filter = nginx-access
logpath = /var/log/nginx/access.log
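Before activating the jail, the filter can be tested against the log with the fail2ban-regex tool (a quick sanity check; the exact match counts will of course vary):
sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-access.conf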
Restart the fail2ban service (e.g. systemctl restart fail2ban) to enable the new rule.
I started with only a few keywords to filter on, but the regular expression can easily be extended with further terms.
Hope - Journal
Sayan ChowdhuryThis accompanies Hope.
This is a daily journal of the effort, to be read from bottom to top.
To explain the jargon:
- cw: current weight
- gw: goal weight
Ok, let's start.
September 12, 2022
- cw: 80.3 kgs
Starting things off again: after coming back to Bangalore, I had a tough time setting things up, so I was mostly eating food from outside and gained around 10 kgs in the past month, haha!
This time it's a bit different from the last, as I will be hitting the gym and walking as well. Let's see how it goes!
To begin, I started with a 15 min walk on the treadmill at 6 km/h and incline 10. In the evening, I went for a 1.53 km walk at a pace of 10:05 min/km.
I chugged a total of 1.25L of water yesterday. Need to bring this up to 4L a day.
Connecting Laptop and Phone | KDE Connect
Shivam SoniSubscriptions
- Abhilash Raj
- Anu Kumari Gupta (ann)
- Anurag
- Anurag Kumar
- Anwesha Das
- Armageddon
- Arnau Orriols
- Arpita Roy
- Arun Sag
- Bhavin Gandhi
- Bijra HIgh School
- Chandan Kumar
- Darshna Das
- Dhriti Shikhar
- Farhaan Bukhsh
- Frederick "FN" Noronha
- Indranil Das Gupta
- Jason Braganza
- Kenneth Gonsalves
- Kuntal Majumder
- Kushal Das
- Nabarun Pal
- Oindrila Gupta
- Pallavi Chaurasia
- Pradeepto Bhattacharya
- Pradyun Gedam
- Praveen Kumar
- Priyanka Saggu
- Rahul Jha
- Rahul Sundaram
- Ratnadeep Debnath
- Robin Schubert
- Runa Bhattacharjee
- Samikshan Bairagya
- Sandeep Choudhary
- Sanjiban Bairagya
- Sankarshan Mukhopadhyay
- Sanyam Khurana
- Saptak Sengupta
- Sayamindu Dasgupta
- Sayan Chowdhury
- Shakthi Kannan
- Shivam Soni
- Soumya Kanti Chakraborty
- Souradeep De
- Stéphane Péchard
- Subho
- Suraj Deshmukh
- Tosin Damilare James Animashaun
- Trishna Guha