Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

Running Snowflake proxy

Snowflake is a technology that allows people from all over the world to access censored applications and websites.

Similar to how VPNs assist users in getting around Internet censorship, Snowflake helps you avoid being noticed by Internet censors by making your Internet activity appear as though you're using the Internet for a regular video or voice call.

I have been running a Snowflake proxy for some time now, installed on a server using the Ansible role. I also sent in a couple of patches to the role.
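
For the curious, applying such a role boils down to a tiny playbook. This is a hypothetical sketch; the host and role names are placeholders for whichever Snowflake proxy role you actually use:

# hypothetical playbook sketch; replace host and role names with your own
- hosts: snowflake_host
  become: true
  roles:
    - snowflake_proxy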

$  systemctl status snowflake-proxy.service 
● snowflake-proxy.service - snowflake-proxy
     Loaded: loaded (/etc/systemd/system/snowflake-proxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-09-29 07:39:32 UTC; 4 days ago
       Docs: man:snowflake-proxy
             https://snowflake.torproject.org/
   Main PID: 1188368 (proxy)
      Tasks: 9 (limit: 9495)
     Memory: 138.8M
        CPU: 2h 50min 12.358s
     CGroup: /system.slice/snowflake-proxy.service
             └─1188368 /usr/bin/proxy -capacity 0

Oct 03 02:39:33 XXXX proxy[1188368]: 2022/10/03 02:39:33 In the last 1h0m0s, there were 5 connections. Traffic Relayed ↑ 61 MB, ↓ 7 MB.
Oct 03 03:39:33 XXXX proxy[1188368]: 2022/10/03 03:39:33 In the last 1h0m0s, there were 5 connections. Traffic Relayed ↑ 4 MB, ↓ 714 KB.
Oct 03 04:39:33 XXXX proxy[1188368]: 2022/10/03 04:39:33 In the last 1h0m0s, there were 5 connections. Traffic Relayed ↑ 25 MB, ↓ 2 MB.
Oct 03 05:30:51 XXXX proxy[1188368]: sctp ERROR: 2022/10/03 05:30:51 [0xc0014eea80] stream 1 not found)
Oct 03 05:30:51 XXXX proxy[1188368]: sctp ERROR: 2022/10/03 05:30:51 [0xc0014eea80] stream 1 not found)
Oct 03 05:39:33 XXXX proxy[1188368]: 2022/10/03 05:39:33 In the last 1h0m0s, there were 12 connections. Traffic Relayed ↑ 39 MB, ↓ 7 MB.
Oct 03 06:39:33 XXXX proxy[1188368]: 2022/10/03 06:39:33 In the last 1h0m0s, there were 17 connections. Traffic Relayed ↑ 83 MB, ↓ 19 MB.
Oct 03 07:39:33 XXXX proxy[1188368]: 2022/10/03 07:39:33 In the last 1h0m0s, there were 13 connections. Traffic Relayed ↑ 180 MB, ↓ 26 MB.
Oct 03 08:39:33 XXXX proxy[1188368]: 2022/10/03 08:39:33 In the last 1h0m0s, there were 17 connections. Traffic Relayed ↑ 101 MB, ↓ 99 MB.
Oct 03 09:39:33 XXXX proxy[1188368]: 2022/10/03 09:39:33 In the last 1h0m0s, there were 32 connections. Traffic Relayed ↑ 238 MB, ↓ 21 MB.

You can even run a proxy in your browser.

by Anwesha Das at October 03, 2022 09:56 AM

My first custom Fail2Ban filter

On my servers that are meant to be world-accessible, the first things I set up are the firewall and Fail2Ban, a service that automatically updates my firewall rules to reject requests from IP addresses that have failed repeatedly before. The ban duration and the number of failed attempts that trigger a ban can easily be customized; that way, bot and hacker attacks that try to break into my system via brute force and trial and error can be blocked, or at least delayed, very effectively.

Luckily, many pre-defined modules and filters already exist that I can use to secure the services I offer. To set up a jail for sshd, for instance, and do some minor configuration, I only need a few lines in my /etc/fail2ban/jail.local file:

[DEFAULT]
bantime  = 4w
findtime = 1h
maxretry = 2
ignoreip  = 127.0.0.1/8 192.168.0.1/24


[sshd]
enabled   = true
maxretry  = 1
findtime  = 1d

Just be aware that you should not change /etc/fail2ban/jail.conf, as it may be overwritten when Fail2Ban is updated. If a jail.local is not already present, create one.

As you can see, I set some default options for how long IPs should be banned and after how many failed tries. I also exclude local IP ranges from bans, so I won't lock myself out every time I test a new service or setting. For sshd, however, I tighten the rules even further, since I only use public key authentication, where I don't expect a single failure from a client that is allowed to connect. All the others can happily be sent to jail.

It's always a joy but also kind of terrifying to check the jail for the currently banned IPs; the internet is not what I would call a safe place.


sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     211
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 2016
   |- Total banned:     2202
   `- Banned IP list: ...

My own filter

To identify IP addresses that should be banned, Fail2Ban scans the appropriate log files for failed attempts with a regular expression, as the sshd module does with my /var/log/auth.log.

As mentioned above, there are already quite a few pre-defined modules. For my nginx reverse proxy, the modules nginx-botsearch, nginx-http-auth and nginx-limit-req are available; the log file they scan by default is /var/log/nginx/error.log.

However, having a look in my /var/log/nginx/access.log, I regularly find lots of failed attempts probing my infrastructure. They look like this:

118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.7/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.4/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysqladmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/myadmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.1.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.9.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.8.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.0/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96"
198.98.59.132 - - [30/Sep/2022:01:51:59 +0200] "POST /boaform/admin/formLogin HTTP/1.1" 404 134 "http://xxx.xxx.xxx.xxx:80/admin/login.asp" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /.env HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /config.json HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /.git/config HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"

I don't use phpMyAdmin and I don't host a WordPress site (requests to wp-login and wp-admin are pretty common), and I would prefer to ban IPs that scan my infrastructure for these services. So I wrote a new filter to scan my nginx access.log file for requests of that kind.

In /etc/fail2ban/filter.d/nginx-access.conf I added the following definition:

[Definition]

_daemon = nginx-access
failregex = (?i)^<HOST> .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)
  • (?i) makes the whole regular expression case-insensitive, so it will match phpmyadmin and PhpMyAdmin equally.
  • ^<HOST> will look from the start of each line to the first space for the IP address. <HOST> is a capture group defined by Fail2Ban that must be present in every failregex, so that Fail2Ban knows whom to ban.
  • .* matches any character, an arbitrary number of times.
  • (wp-login|wp-admin|...) are the request snippets to look for; in parentheses and separated by the pipe operator, the expression will match any of the given strings.
  • (404|403) are the HTTP response codes for "not found" and "forbidden". So if these pages are not available or not meant to be accessed, this rule will be triggered.
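
Before enabling a filter like this, it can be tested against an existing log file with the fail2ban-regex utility that ships with Fail2Ban; it reports how many log lines the failregex matches:

fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-access.conf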

In my jail.local I add the following section to use the new filter:

[nginx-access]
enabled   = true
port      = http,https
filter    = nginx-access
logpath   = /var/log/nginx/access.log

Restart the fail2ban service (e.g. systemctl restart fail2ban) to enable the new rule.
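
After the restart, the new jail can be inspected just like the sshd jail above:

sudo fail2ban-client status nginx-access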

I started with only a few keywords to filter, but the regular expression can easily be extended with further terms.

by Robin Schubert at October 01, 2022 12:00 AM

Restart Emacs on System Startup

This post was originally published on 2022-09-27.
Updated 2022-09-28 to include the improved script.


It’s been a year since I figured out the hack that finally let me use my Compose key in Emacs.1
Without it, I am unable to type any kind of quotation marks or umlauts in Emacs.
A year in, and I’m tired of restarting the service every time the machine comes on.
The computer can do that for me.

So I whipped up this tiny bash script

#!/usr/bin/env bash
if [[ $(( $(pidof -s emacs) )) -le 10000  ]]; then
	systemctl --user restart emacs; 
else
	:;
fi

It just arbitrarily checks the Emacs service’s PID and then, if it’s less than or equal to 10000, it goes ahead and restarts the service.
This is just some superstitious thing I did: when I was troubleshooting all those months ago, I assumed that if Emacs started at a PID in the low thousands, it must not have loaded something, and that restarting it to get a higher process ID meant whatever it needed must have loaded.

I learnt a couple of things doing it this way too.

  1. The output of a bash command is a string.
  2. I use $(( some_numerical_string )) to convert said string into an integer (a quick demo follows below).
  3. : means pass in bash-speak
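
A tiny sketch of those three points at the shell prompt (assuming an Emacs process is running):

pid=$(pidof -s emacs)   # command substitution yields a string
echo $(( pid + 0 ))     # arithmetic expansion treats it as an integer
:                       # ':' is the shell no-op, bash's 'pass'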

What I ought to do is check for the existence of a PID and, if there is one, then go ahead and restart Emacs. I’ll do that some other time, because right now I’ve put this script into my system startup and it just works.


Update 2022-09-28

I obviously am incapable of leaving well enough alone.
As I was in the bath this morning, it struck me that I was restarting the service using systemctl restart …
So I could just check the exit status of a systemctl status command and then, depending on what I got, do something. If it was running, I could just restart it.
I checked on the command line and sure enough, if a service is running, the exit status is 0 and if it isn’t, the status is 3 .2
So … then I monkeyed with the script to this

#!/usr/bin/env bash

if systemctl --user status emacs > /dev/null; then
    systemctl --user restart emacs;
else
    :;
fi;

If the emacs service is running, restart it, else do nothing.
This ought to quiet the monkey brain for a while.
I hope 🙂


P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


  1. all i have to do is restart my Emacs service as soon as the system boots up ↩︎

  2. I don’t know if it is always 3. It could probably vary. But it suffices in my case that it isn’t 0. ↩︎

September 28, 2022 05:30 AM

11

As is our wont, we spent the day gallivanting in a jungle and having an accident.
And then crying for a while and then laughing about what a story it would be for the rest of our lives.

Much like our marriage.
We make the best stories, don’t we?
And I am who I am, because of you.
Eleven years, and it still feels like only yesterday. 1
I love you.



Like the old hymn goes …

You alone are my strength, my shield … to you alone may my spirit yield
You alone are my hearts desire … 🎵



  1. We still compare our wedding pics with other folks, unilaterally declaring ours the much better set. ↩︎

September 27, 2022 02:39 PM

Blocks in Org Mode

I remember, when I first learned it, the Org Manual mentioning that I could have code, quotes, poetry and sundry self-structured blocks of text, where the text in that block would flow like I wanted it to. I could have indentation or line breaks as I pleased.
And then I promptly forgot about it.

The only thing I did remember was code blocks.
And that I needed to type a #+begin_src and then a #+end_src and put my code in the middle. And all this while, I would keep typing them in by hand.

Until I tired of doing that shit over and over, because there are more and more notes going into my zettelkasten now, and decided: well, the computer can do that for me.
So, I went back to the Org Mode documentation and discovered Structure Templates.
And now, I am at peace!

Turns out, all I needed was (org-insert-structure-template) aka C-c C-,
Here’s some code I’ve selected in a document


Now I hit C-c C-,, which brings up a whole host of options, with a prompt!
Do I want this block of text to be a quote? Or some verse?
Well, this is just boring code, so I choose source with the s key


And tada! The block is surrounded by the begin and end tags, and is now a source block! I could add the programming language after #+begin_src to get highlighting as well, but that’s a story for another day.
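
For instance, a couple of selected lines wrapped this way, with the language added by hand afterwards, would end up roughly as:

#+begin_src python
def greet(name):
    return "Hello, " + name
#+end_src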

I’m practicing this, so that I get C-c C-, into my muscle memory, because be it verse, quote, code, example or exports, I know I will be making heavy use of Structure Templates.


P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.

September 27, 2022 08:37 AM

Links to Writing of Interest

Been meaning to write up a short note to some of my writing to point folks to.

So here goes. This serves as a mélange of the things I write and interest me.

  1. I’ve coauthored a book on Linux, Linux for You and Me
  2. My blog at Janusworx
  3. I have written long-form articles to serve as teaching material for the Linux Users’ Group of Durgapur

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.

September 25, 2022 11:51 AM

Bare minimum vim setup for YAML

I am spending my days with Ansible and Kubernetes. This means I am writing YAML files all day long. I invariably spend a significant portion of my working hours fixing indentations in the YAML files. I am sure this is the case with many DevOps engineers these days.

I found vim-plug and indentLine. vim-plug is a minimalist Vim plugin manager, and indentLine displays thin vertical lines at each indentation level for code indented with spaces. I set up both of them in the following way:

Edit the .vimrc file

Add the following lines to the .vimrc file to make sure that I use 2-space indentation for YAML files.

filetype plugin indent on
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab

Add the following lines to the .vimrc file to load the plugin via vim-plug:

call plug#begin()
Plug 'Yggdroot/indentLine'
call plug#end()
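
If vim-plug itself is not installed yet, its README suggests fetching plug.vim into Vim's autoload directory, after which :PlugInstall inside vim installs the listed plugins:

curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim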

Final output:

Now when I edit a YAML file in vim, the indentation levels are all marked visually.

 12 ---
 11 - name: Apt update
 10   ansible.builtin.apt:
  9   ¦ update_cache: yes
  8
  7 - name: Get list of updates
  6   ansible.builtin.command:
  5   ¦ cmd: apt list --upgradable
  4   register: update_list
  3
  2 - name: Print the update_list
  1   ansible.builtin.debug:
13    ¦ msg: "{{ update_list.stdout_lines }}"

I hope this super minimal vim setup will be helpful to others.

by Anwesha Das at September 22, 2022 04:12 PM

Exclude Files and Folders From an Rsync Copy

I’ve been sticking to plain old rsync -az to sling files around.
Until I ran into a hiccough today, where I filled up my teensy remote storage on the Pi, because a couple of subdirectories that were part of the run were hundreds of megabytes large.1

So I did the usual hunt-around-the-web thing, and learnt about --exclude

So now the new Rsync command is …

rsync -az src dest --exclude={'excluded_dir_1','excluded_dir_2'}
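
If the list of exclusions grows, rsync can also read the patterns from a file via --exclude-from, which keeps the command line short:

rsync -az src dest --exclude-from=exclusions.txt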

And my poor Pi no longer has disk full nightmares.

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.


  1. I’d expected the normal run to be a few hundred kilobytes in size. ↩︎

September 18, 2022 02:13 PM

Hope - Journal

This post accompanies Hope

This is a daily journal of the efforts, to be read from bottom to top.

To explain the jargon:

  • cw: current weight
  • gw: goal weight

Ok, let's start.

September 12, 2022

- cw: 80.3 kgs

Starting things off again. After coming back to Bangalore, I had a tough time setting things up, so I was mostly eating food from outside and gained like 10kgs in the past month, haha!

This time it’s a bit different from the last, as I will be hitting the gym, and walking as well. Let’s see how it goes!

To begin, I started with a 15min walk on the treadmill at 6kmph and incline 10. In the evening, I went for a walk of 1.53km at a pace of 10:05min/km.

I chugged a total of 1.25L of water yesterday. Need to bring this up to 4L a day.

September 12, 2022 12:00 AM

khata, under WASI

While I am slowly learning about WebAssembly, I was also trying to figure out where I can use it. That is the way I generally learn all new things. So, as a start, I thought of compiling my static blogging tool khata into WASM and then running it under WASI.

While trying to do that, the first thing I noticed was that I cannot just create a subprocess (I was doing that to call rsync internally), so that was the top-priority thing to fix on the list. First, I tried to use the rusync crate, but it failed at runtime as it uses threads internally. After asking around a bit more in various Discord channels, I understood that the easiest way would be to just write that part of the code myself.

That is now done, which means this blog post is actually rendered using wasmtime.

wasmtime --dir=. khata.wasm

I am very slowly learning about the SPEC and the limitations. But this is a very interesting and exciting learning journey so far.

September 10, 2022 04:21 PM

Connecting Laptop and Phone | KDE Connect

I wanted an application that connects my phone with my laptop so that I can get important notifications from my phone on my laptop. First, I tried scrcpy; however, it was more like a remote desktop protocol, so it was not fit for my purpose. Then I found KDE Connect, which I found useful for my case, as it uses wifi for the connection.

KDE Connect mobile application has a lot of features.

Using KDE Connect, I can send my clipboard from my phone to my laptop and vice versa; I can even send files to my laptop. The feature I liked most was the slideshow remote: using it, one can control a presentation from their phone. However, the remote input feature is not working for my device. I will try to fix it if possible.

The feature I like most in the desktop client is Ring device; with it, I can easily find my phone even when it is on silent, as long as it is connected to wifi. However, if my device is not connected to the network, it is a hassle to find it.

by Shivam at September 06, 2022 06:12 PM

"Python for Everyone": learning Python

I conceptualized "Python for Everyone" to help women who want to start their careers in technology. After the introduction sessions, it was time for us to learn real "Technology". What better to start with than Python?

Python 101 Part I in Sunet

Sunet has been a great support to us since the program's very initiation. It was our third consecutive event happening there. We cannot thank them enough for their support and encouragement. Especially when it was Summer and no place was available, Sunet gave us space to start.

There is a hidden agenda to the program, which is to give first-time speakers a platform to start their speaking journey. Our speakers for Python 101 were Kitty Deepa and Zahra. When I approached them for the Python 101 session, they were like, "Ahh, umm, not sure," but I said, "you girls can do this". I am so proud to see how well they did. The idea was to complete the whole of Python 101 on that very day, but they could not finish it due to time constraints, so we decided to have a Part II.

Python 101 Part II in Microsoft Reactor

The next session was at Microsoft Reactor. Microsoft Reactor and PyLadies Stockholm had been trying to make this collaboration happen for a long time. At the very beginning, when I was looking for places to host our PyLadies meet-up, Christine (as always) came as a savior and introduced me to Erika. And now, finally, we could have the meet-up at Microsoft Reactor.

Korey was our first speaker. In his talk, he told us what Microsoft Reactor is, the idea behind it, and why they are interested in the PyLadies community. We loved your talk, Korey. And soon, he will come back with another exciting talk.

The next session was mine. To get better at programming, we must do the same thing repeatedly. So in this revision session, we brushed up together on the Python we had learned the other day. I covered string manipulation, data structures, variables, and Python basics.

In the final Python 101 Part II session, Kitty Deepa and Zahra picked up where they had left off in the previous session. I want to thank them for their sessions.

Thank you, Erika

I want to mention a particular person here: Erika. She is always so wonderful and helpful. Some valuable and fruitful discussions happened that day at the event. There is some more exciting collaboration coming soon :)

by Anwesha Das at September 05, 2022 10:55 AM

Using ssh-keygen for generating ssh keys

I wanted to create an ssh key for the web server ssh login. I searched and found this article.

I have used ssh-keygen in the past by reading a tutorial, but this time I wanted to learn about it properly.

After reading the article, I learned I could select the cryptographic algorithm for generating the key.

ssh-keygen -t algotype -b keysize -f /path/filename

I tried generating keys with different algorithms and different key sizes. For some algorithms the key size is fixed; for others it is not. If I enter a large key size, it takes a significant time to generate the key.
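
For example (a quick sketch of the general form above; the file names are mine): ed25519 has a fixed key size, so -b is not needed, while RSA accepts sizes like 4096:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_server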

I will soon try generating the keys using my hardware key, as it can eliminate the need for passphrases.

by Shivam at September 04, 2022 06:07 PM

The Shift

My Nani (maternal grandmother) used to tell me about her experiences and stories of the village where she lived with her in-laws. She said that the women of that time asked each other in the evening whether their horses had arrived at their homes. I asked what that meant, and she explained that people used to travel by horse, so by asking about the horse, the ladies were asking whether the family member or husband of the lady in question had come home, as travel used to take days at that time. She mentioned that everyone used to have a horse at home, just as we have bikes and cars now. Then horses were replaced by bicycles and new automobiles.

I have observed that the things everyone owned in the past have been replaced by new technology, and the old things have become a luxury. For example, everyone used to have a bicycle at home, but now they are gone, owned mainly by upper-class people. The same goes for horses: once almost everyone owned a horse; now it is mostly rich people who own horses, for various purposes.

There are many things we are using right now that will soon be replaced, and we will never notice until people start selling them as vintage items. There are many things, like radios and cameras, that are not easily available in the market right now; however, a lot of personal memories are associated with them. I think that is why people buy or keep old items.

by Shivam at September 03, 2022 10:26 AM

Critical security alert | Failed login attempts detected

When I checked my email today, I found the following email from Bitwarden.


Additional security has been placed on your Bitwarden account.
We've detected several failed attempts to log into your Bitwarden account. Future login attempts for your account will be protected by a captcha.
Account: myemail
Date: Friday, September 2, 2022 at 1:49 PM UTC
IP Address: 223.196.188.87
If this was you, you can remove the captcha requirement by successfully logging in.
If this was not you, don't worry. The login attempt was not successful and your account has been given additional protection.

I had no clue why someone was trying to get into my Bitwarden account; a couple of hours later, I received another email, this time from Google.

Some of your saved passwords were found in a data breach from a site or app that you use. Your Google Account is not affected.

To secure your accounts, Google Password Manager recommends changing your passwords now.

I used to use Google Password Manager when I was new to the internet; however, I left it because of security concerns. Google still sends me an email if a password stored in the password manager is found somewhere. People try to get into my crypto accounts whenever they find my passwords online, but I don’t use those weak passwords anymore. Initially, I didn’t know why people were trying to log into my accounts; then I observed a pattern: some time after I get an email about a failed login attempt, I get an email from haveibeenpwned telling me my email was found in some breach.

I was using the same password on three websites for which Google notified me of the breach: Uber, Udemy and Pomodone. So now I have to wait and see in the news which website got breached, as when I checked on haveibeenpwned, they had not reported any new breaches.

I am relieved, as I have improved my security hygiene by using unique passphrases for different websites, with a security key for 2FA.

by Shivam at September 02, 2022 05:44 PM

Johnnycanencrypt 0.9.0 release

3 days ago I released Johnnycanencrypt 0.9.0. Here is the changelog:

- Adds `setuptools-rust` as build system.
- Key.uids now contains the certification details of each user id.
- `merge_keys` in rjce now takes a force boolean argument.
- `certify_key` can sign/certify another key by both card and on disk primary key.

The first big change is related to the build system: we now use setuptools-rust to build. This change happened as dkg is working towards packaging the module for Debian.

The other big change is about certifying someone's key. We can use the primary key (either on disk or on a Yubikey) to do the signing.

k = ks.certify_key(
    my_key,
    k,
    ["Kushal Das <kushaldas@gmail.com>", "Kushal Das <kushal@fedoraproject.org>"],
    jce.SignatureType.PositiveCertification,
    password=password,
)

In the above example I am signing two user IDs of the key k using my_key, with a PositiveCertification.

September 02, 2022 06:39 AM

How I started programming

My three kids are now 5, 7 and 9 years old. Meanwhile, they all have their own rooms which, for the two older school kids, contain desks with a Raspberry Pi 400 on them. They use it to look up pictures of Pokemon, to listen to music and to play minetest, supertuxkart or the secret of monkey island :) Well, they also used it for joining classes remotely during lockdown.

My primary intention was to make the computer accessible so that - whenever their interest arises - they could play around and discover things on their own. Today I think that the number of possibilities may be way too high to just sit down and start with something specific.

I am actually considering running the Pis in some kind of kiosk mode, to reduce distraction. I remember that, on the first computer I used, we ran one program at a time. If you decided to run another program, you would turn off the computer, change the floppy and restart. Of course it's nice to have multiple things running at once on a computer, but to learn something new, I would argue that running one thing and only that one thing might be best.

Our first family computer

Thinking back when I was their age, it must have been the time when my father received an old Amstrad PCW (Joyce) from a friend, our first computer.

I was fascinated by that machine; I loved the green-on-black text and the different noises it made - especially the dot matrix printer noises :D My father used it for word processing, and because that was all he needed, it was all he ever tried. I also loved editing text in locoscript (which was just awesome) and playing the few games that were available.

However, the Joyce came with a BASIC and with the Logo programming language. I had no idea what either of them was, nor had anyone in our family. So one day I grabbed the manuals (which luckily were in German) and started learning Logo and running the examples until I was able to draw my own little pictures. In a playful manner I learned the concepts of algorithms; of variables, loops and subroutines.

At that time, BASIC was still incomprehensible to me. This changed when my parents, who wanted to foster my interest but didn't quite know how, gifted me a VTech SL, an educational computer that could not really do much, but came with a BASIC and a manual that was actually appropriate for children and that I could follow along nicely. So I soon had plenty of those little programs that would ask you for your name or age and then make funny comments about it. My main motivation to write code was always to eventually develop a cool game. Good for me that some of my friends shared that interest, and one in particular I considered a real programming wizard.

Interest amplification through friends

When I was young, LAN parties were the real thing. I saved money for a then-cheap Medion PC - an Intel Pentium D with an NVidia RIVA TNT graphics card. The only condition my parents put upon me was that I would have to pass an official typewriting course - "The computer is not just a toy; learn touch-typing so you can use it for work/school".

So you would carry your midi tower, 17-inch CRT monitor and a box of cables over to a friend's basement and forget about daytime and the rest of the world for one weekend, over Duke Nukem 3D, Starcraft and Jedi Knight - Dark Forces 2. The friend at whose place we met was two years older and first impressed me when we were missing the last BNC terminator to finalize our LAN connection (yes, that was back when all PCs had to be hooked up in line, connected by a coaxial cable and cleanly terminated on both ends). So he grabbed an ohmmeter, measured the resistance the terminator had to have, found a fitting resistor in a drawer and bent it into shape to close our network connection.

He was regularly programming in Pascal, and I was blown away when he showed us his self-written window manager/desktop environment. It could not do much more than show files as icons, which you could nicely customize in color, but to me it was magic. Together we installed Borland Pascal on my machine and he showed me how to use the built-in documentation system. However, my English skills at that time were simply not good enough to really make sense of that excellent documentation. So I couldn't wait for the computer science course in school to start.

Two extremes of school computer science

Computer science. Awesome! I was so excited about it that it hurt even more when we realized that it would be a complete disappointment. The first "computer science" course I had in school was nothing but a Microsoft Word/Excel/Powerpoint introduction, and not even a good one. Well, we endured, and the next year the teacher changed and so did the course. And that may have been the best class I've ever had.

The new computer science teacher was also a physics teacher and was not too popular with the kids. He had a quite nerdy 70s look, which I appreciate today but which was inscrutable to us when we were young, and a funny name that translates to "beef". However, the topics he covered and the hands-on way he taught them were just great. Within two years we started with the basics of the Pascal programming language and the workings of computer algorithms in a Logo-like environment. After that we switched over to abstract data types (queues, lists, linked lists, trees etc.), computer architecture down to the level of "what does an ALU do, and how?", and finally we wrote our own assembly code to draw icons and images on the screen. That must have been in old unprotected mode, where you could just write into the video adapter's memory directly, as it was mapped into the PC's memory.

Soon enough we found ourselves bumming instruction lines from our assembly programs to find the most elegant and shortest solution to a problem, looking over each other's shoulders and admiring clever tricks. When I read Steven Levy's Hackers many years later, I perfectly remembered that feeling when reading about the first MIT hackers, hacking on the PDP-1.

We finished the course with a group project: we developed an idea for a 2D racing game we called "Geisterfahrer" (wrong-way driver), where the player had to dodge oncoming traffic. We identified the different tasks, planned what routines needed to be programmed and assigned teams. It didn't work out well, but hey, the concept was superb.

College, work and DGPLUG

I hate to admit it, but back in my school days I didn't like the computer science course very much. I simply could not appreciate the value of these lessons; I was bored by abstract data types, didn't know what I would ever need computer architecture knowledge for, and was a bad team player in our final programming task. Only when I was in college, studying physics and computer science, did I realize just how good this school course had been. In two years at college we covered exactly the same topics, going just as deep, but this time I was in a course with ~200 people instead of just 20.

I learned Java and C/C++ basics at college, and when I applied for a project to write my bachelor's thesis, I was looking for programming tasks in physics working groups; there were and still are plenty of them. I did the same when I started my master's thesis, this time programming in Java and C# (just because the syntax was similar but the performance was way better), and after that once again the same to find a PhD position - this time in a medical field. I started to learn Python with Mark Pilgrim's Dive Into Python, which was an excellent choice for me, because it gave plenty of examples and comparisons with other programming languages I already knew.

There's not much interesting to say from that era except one thing: in terms of programming, I was still a bad team player. The code I wrote was hard to maintain; I wrote it alone and I wrote it for me to work. I imagine the poor people coming to the working groups to continue my work had a hard time. I simply never learned how to collaboratively develop software - this part was actually not covered in college.

This only changed when I learned about DGPLUG and the summertraining, where - as I read - people were taught what needs to be known to start contributing to Open Source projects. I've written about that project before, and every summer I realize how much it has changed the way I work today, for the better. And it is only now that I feel like I almost know what I am doing, and why, when I write code.

by Robin Schubert at September 02, 2022 12:00 AM

Estimating the comparative value of INR decades ago vs Now.

Recently I read news about the demise of Mr Rakesh Jhunjhunwala; he was considered the Warren Buffett of India because of his skills & fortune in the stock market. At the time of his death, he had an estimated net worth of $3.8 billion. Then I read another article, “How did Rakesh Jhunjhunwala make his billions?”. This article mentions that Mr Rakesh Jhunjhunwala started with 5000 rupees in 1985.

The next thought that crossed my mind was that 5000 INR is 62.5 USD at the current market price, so I should invest more seriously in the market; maybe I can make a fortune like him. But I was aware of basic economics because of a book I read, Day to Day Economics, so I knew the value of 5000 INR in 1985 must be more than 5000 INR in 2022.

I wanted to know how much I have to invest today if I have to match the investment of Mr Jhunjhunwala in 1985.

I searched for an inflation calculator tool that could tell me about historical inflation in India, so I could calculate from there. I found inflationtool. I entered the USD value of the amount, i.e. 62.5 USD. But then I realised I must check exchange rates to find the conversion value as of 1985. So I searched for historical conversion values and found the Wikipedia page.

In 1985 the average USD/INR conversion rate was 1/12.2349. I divided 5000/12.2349 = 416.7 and approximated that value to 417, so that I could feed the USD value to inflationtool. When I entered the value in the tool, it showed 5,866.05. So at the current rate, 5,866 USD is equal to 467,596.91 INR.

In short, my dream of becoming a big bull is shattered until I earn more, as 5000 INR in 1985 is equivalent to 467,596.91 INR in 2022.

I think I should write a program to automate this calculation process (if my concept is correct). I will try to do this as an explorable explanation. If you don’t know what explorable explanations are, you can read about them on this blog, and here is a page with many of them.
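
A first rough sketch of that program in Python, using the article's example numbers (the current exchange rate of ~80 INR/USD is my assumption, not a live value):

def equivalent_inr_today(amount_inr, usd_inr_then, inflation_factor, usd_inr_now):
    usd_then = amount_inr / usd_inr_then   # convert old INR to old USD
    usd_now = usd_then * inflation_factor  # adjust the USD amount for inflation
    return usd_now * usd_inr_now           # convert back to today's INR

# 5000 INR at ~12.23 INR/USD in 1985; inflation factor taken from the
# tool's output (5866.05 USD for 417 USD); assumed ~80 INR/USD today
print(equivalent_inr_today(5000, 12.2349, 5866.05 / 417, 80))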

by Shivam at September 01, 2022 05:50 PM

The Debug Diary - Chapter I

Lately, I was debugging an issue with the importer tasks of our codebase and came across a code block that looks fine but makes an extra database query inside a loop. When you have a look at the Django ORM query

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more_filters>
).only("manufacturer_code", "uid", "year", "model", "trim")

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

you will notice we are using only, which loads only the set of fields mentioned and defers the other fields. But in the loop we are using the field trim_processed, which is a deferred field, so accessing it results in an extra database query for every row.

Now that we have identified the performance issue, the best way to handle cases like this is to use values or values_list. The use of only should be discouraged in cases like these.

The updated code will look like this:

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more-filters>).values_list(
    "manufacturer_code",
    "uid",
    "year",
    "model",
    "trim_processed",
    named=True,
)

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

By doing this, we are safe from accessing fields that are not mentioned in the values_list; if anyone tries to do so, an exception will be raised.

** By using named=True we get each result as a named tuple, which makes it easy to access the values :)
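
To confirm a fix like this, one option (a sketch; it requires DEBUG=True, since Django only records queries in debug mode) is to count the executed statements before and after the change:

from django.db import connection, reset_queries

reset_queries()
# ... run the loop over jato_vehicles here ...
print(len(connection.queries))  # per-row queries show up as a large count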

Cheers!

#Django #ORM #Debug

August 30, 2022 07:34 AM

LDAP authentication on Home Assistant

Last week I wrote a few sentences about a beautiful script I found to authenticate against an LDAP server, which can be used e.g. on Home Assistant, a platform to manage home automation and the like. We deployed a Home Assistant instance at work to monitor temperatures in various rooms and fridges, and to raise notifications and alarms should temperatures exceed certain thresholds. All team members should be able to log into the system using their central login credentials from the LDAP server.

Unforeseen difficulties

The shell script uses either of the command line utilities ldapsearch (from the openldap-clients package) or curl to make a request to the LDAP server, which requires a valid username and password. Both tools return an error code > 0 if something goes wrong; as usual, an exit code of 0 lets us know that the command worked and thus that the username/password combination was correct. Further, the LDAP server can be queried for some extra attributes like displayName or others, which can be mapped into the requesting system.

However, there was one issue I hadn't anticipated: neither ldapsearch nor a curl compiled with LDAP support was available on the Home Assistant.

There are plenty of ways to deploy Home Assistant. We had a spare Raspberry Pi and decided to use the HassOS distribution that is recommended when installing on a Pi. HassOS (the Home Assistant Operating System) is a minimalistic operating system that deploys the individual modules of Home Assistant as containers. The deployed containers are usually built on Alpine images. However, there were two problems:

  1. Software that I installed in any container would not be persistent, but would vanish on every re-boot.
  2. I couldn't even locate, let alone access, the correct container that does the authentication.

Trial and error

As a proof of concept, I installed an SSH integration that would at least let me communicate with parts of the Home Assistant system via ssh. The ssh container by default also mounts the config and other persistent directories of Home Assistant.

So I downloaded the ldap-auth.sh script to the persistent config folder, started by adding the ldapsearch tool with apk add openldap-clients, and configured ldap-auth.sh until I was able to authenticate. I updated the Home Assistant config with an auth_providers section like this:

homeassistant:
  auth_providers:
    - type: command_line
      command: /config/scripts/ldap-auth.sh
      meta: true
    - type: homeassistant

Beware! Do include type: homeassistant in your list of auth providers, or you will lock yourself out of the system if the script does not work correctly (just like I did).

After reloading the config, login with the command_line type of course failed, but I didn't find any logs that would propagate the error message. So I added some echo lines to the script manually, only to find out that ldapsearch could not be found by the authenticating container.

So I tried my luck with curl; however, I could not make any reasonable request without built-in LDAP support.

Build my custom curl

So I figured I basically had three possibilities:

  1. Using a different distribution of Home Assistant that I might be able to control better,
  2. requesting the feature of having openldap-clients baked into the container images, or building (and maintaining) the image myself, or
  3. building curl for my target container with all the needed functions linked statically into one binary.

I assumed that all containers in the Pi's Home Assistant ecosystem would be of the same architecture, which is Alpine on aarch64 for the ssh container. So I installed all the dependencies I needed on the ssh container, cloned the curl repo and started configuring, installing missing dependencies on the fly.

./configure --with-openssl --with-ldap --disable-shared

Choosing the ssl library is mandatory; --disable-shared should prevent the use of any shared library, so that none of the dependencies I had to install would be missing on the target machine later.

The build went through and I had an LDAP-enabled curl that I could test my requests with, so again I tinkered with the ldap-auth.sh script until it succeeded.

However, when used from the web interface it again would not work, this time complaining about missing dependencies, which I thought I had all included.

Checking the compiled binary, I found it was 769.4K, much bigger than my 199K system curl, so something must have been linked statically. Looking up the shared object dependencies revealed what was missing:

[core-ssh ~]$ ldd curl
        /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libssl.so.1.1 => /lib/libssl.so.1.1 (0x7f92f76000)
        libcrypto.so.1.1 => /lib/libcrypto.so.1.1 (0x7f92d26000)
        libldap.so.2 => /lib/libldap.so.2 (0x7f92cc1000)
        liblber.so.2 => /lib/liblber.so.2 (0x7f92ca3000)
        libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libsasl2.so.3 => /lib/libsasl2.so.3 (0x7f92c79000)

While these are still far fewer dependencies than my system-installed curl has:

=> ldd `which curl`
        linux-vdso.so.1 (0x00007ffc8fdb6000)
        libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fce55263000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fce55057000)
        libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fce5502c000)
        libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fce5500a000)
        libssh2.so.1 => /usr/lib/libssh2.so.1 (0x00007fce54fc9000)
        libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fce54fb6000)
        libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fce54f1f000)
        libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fce54c3f000)
        libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fce54bea000)
        libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007fce54b41000)
        libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007fce54b33000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007fce54b19000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fce55380000)
        libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fce5496b000)
        libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fce54892000)
        libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fce54862000)
        libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fce5485c000)
        libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fce5484d000)
        libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fce54846000)
        libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fce54831000)
        libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x00007fce5480e000)

there were still way too many shared libraries involved for my taste.

I even asked in #curl on the Libera network what I could have done wrong or misunderstood.


14:57:34    schubisu | hi everyone! I'm trying to build a statically linked curl
                     | and configured with `--with-openssl --with-ldap --disable-shared`.
                     | However, when I run the binary on another machine it says
                     | it cannot find the shared libraries libldap and liblber. Did I
                     | misunderstand static linking?
15:27:25      bagder | static linking is a beast

Well, it was nice to hear that it may not have been entirely my fault :) bagder pointed me to Static curl, a GitHub repository that builds static releases for multiple platforms (YAY), but sadly also with disabled LDAP support (AWWW). Running the build script with LDAP enabled didn't go through either.

An ugly hack to the rescue

Having spent way too much time on this issue, I went ahead with something that may be an ugly hack, but it's also a "works for me": I had already copied the statically linked curl into the persistent config folder, so I would just add the missing libraries there as well.

I figured that of the 7 shared dependencies, 4 were available in the standard Alpine image anyway, so I was missing only three files:

  • libldap.so.2
  • liblber.so.2
  • libsasl2.so.3

that I copied from my ssh container into the persistent storage. I adjusted the ldap-auth.sh script one last time to add one line:

export LD_LIBRARY_PATH="/config/scripts"

and that did the trick.

I also confirmed that on the fresh system after re-boot, everything is still in place and working beautifully :)

by Robin Schubert at August 26, 2022 12:00 AM

Introducing Blogging Friday

It's not that I don't have things to write about; in fact, I learn interesting new things every week. I have, however, never integrated a dedicated time to write new posts into my weekly routine. So, to not procrastinate any further, I am starting Blogging Friday right now, with some things I did this week.

Lower the threshold for new posts

I'm using lektor as static site generator; it's lightweight, and new posts are really quick to generate. All it takes is a new sub-folder in my blog directory, containing a contents.lr file with a tiny bit of meta information. Apparently even this little effort is enough to trigger my procrastination. So, to get this hurdle out of the way, a little shell script is quickly written:

#!/usr/bin/env bash
#filename: new_post.sh

if [ -z "$1" ]; then
    echo "usage: $0 <title>"
    exit 1
fi

posttitle="$*"
basepath="/home/robin/gitrepos/myserver/blog/content/blog"
postdir=$(echo "$posttitle" | sed -e "s/ /_/g" | tr "[:upper:]" "[:lower:]")
fullpath="$basepath/$postdir"
postdate=$(date --iso)

if [ -e "$fullpath" ]; then
    echo "file or directory $postdir already exists"
    exit 2
fi

mkdir "$fullpath"
echo "
title: $posttitle
---
pub_date: $postdate
---
author: Robin Schubert
---
tags: miscellaneous, programming
---
status: draft
---
body:
" > "$fullpath/contents.lr"

echo "created empty post: $postdir"

LDAP authentication for random services

I've integrated a few web services in our intranet at work, like a self-hosted gitlab server, a zammad ticketing system, nextcloud and the like. One requirement to integrate well in our ecosystem is the ability to authenticate with our OpenLDAP server. The services I have configured so far all had their own means to authenticate against LDAP; some need external plugins, some are configured in web interfaces and others in configuration files. However, honestly, I never understood what they did under the hood.

I had a little epiphany this week when I tried to integrate a homeassistant instance. Home Assistant does not have a fancy front-end to do this; instead, it is realized with a simple shell script. There's an example on github which can be used and is actually not that hard to comprehend.

In summary, what it does is make a request to the LDAP server, either via ldapsearch (part of the openldap-clients package) or curl (which needs to be compiled with LDAP integration). An example request with ldapsearch could look like this:

ldapsearch -H ldap://ip.of.ldap.server \
    -b "CN=Users,DC=your,DC=domain,DC=com" \
    -D "CN=Robin Schubert,CN=Users,DC=your,DC=domain,DC=com" \
    -W

Executed from the command line, this will prompt for the user's password and make the request to the server. If everything works fine, the command will exit with exit code 0; if the code is different from 0, the request failed for whatever reason. This result is passed on.

That's it. Nothing new. Why then didn't I think of such a simple solution? The request via ldapsearch can of course be further refined, adding filters and piping the output through sed to map e.g. display names or groups and roles.

Playing with PGP in Python using PGPy

I was exploring different means to deal with electronic signatures in Python this week. The first library I found was python-gnupg; I should have been more suspicious when I saw that the last update was 4 years ago. They may be calling it pretty bad protocol for a reason. It is a wrapper around the gpg binary, using Python's subprocess to call it. This was not really what I wanted. For similar reasons, Kushal started johnnycanencrypt in 2020, a Python library that interfaces the Rust OpenPGP lib sequoia-pgp and which I have yet to explore further.

A third option I found is PGPy, a pure Python implementation of OpenPGP. Going through the examples in its documentation, it feels straightforward; for the relatively simple use case I have (managing keys, signing and verifying signatures), it should be perfectly usable.
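
A minimal sketch of that use case, following the examples from the PGPy documentation (untested here; the name and email are placeholders):

import pgpy
from pgpy.constants import (CompressionAlgorithm, HashAlgorithm,
                            KeyFlags, PubKeyAlgorithm, SymmetricKeyAlgorithm)

# generate a signing key and attach a user id to it
key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 4096)
uid = pgpy.PGPUID.new("Robin Schubert", email="robin@example.com")
key.add_uid(uid,
            usage={KeyFlags.Sign},
            hashes=[HashAlgorithm.SHA256],
            ciphers=[SymmetricKeyAlgorithm.AES256],
            compression=[CompressionAlgorithm.ZLIB])

# sign a message and verify the signature with the public key
message = pgpy.PGPMessage.new("some important text")
signature = key.sign(message)
assert key.pubkey.verify(message, signature)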

That's been my week

Nothing I tried this week was groundbreaking or new, but it either interested me or kept me busy in some way. I wonder what the statistics would look like if I counted how many times I look up the same issues and problems on the internet. Maybe writing some of them down will help me remember - or at least give me the possibility to look things up offline in my own records ;)

by Robin Schubert at August 19, 2022 12:00 AM

How to add a renew hook for certbot?

After moving the foss.training server to a new location, we found that the TLS certificate had expired. I looked into it and figured out that though certbot had renewed the certificate, it never reloaded nginx.

Now, to make sure that nginx is reloaded next time, we must add the renew hook in /etc/letsencrypt/renewal/foss.training.conf under [renewalparams]:

renew_hook = service nginx reload

One must remember to update the path based on their domain name. Thank you Saptak for pointing out the expired certificate and mentioning that it is a common pain point for people. I hope this will be helpful for others in the future.
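
If I read the certbot documentation correctly, a similar effect can be had by passing the hook once on the command line, which certbot then saves into the renewal configuration for future runs:

certbot renew --deploy-hook "service nginx reload"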

by Anwesha Das at August 18, 2022 03:31 PM

johnnycanencrypt 0.7.0 released

Today I released Johnnycanencrypt 0.7.0. It has breaking changes to some function names.

  • create_newkey renamed to create_key
  • import_cert renamed to import_key

But the major work was done in a few different places:

  • Handling errors better: no more plain Rust panics; instead we provide proper Python exceptions as CryptoError.
  • We can now sign bytes/files both in detached form and in normal compressed binary form.
  • Signatures can be created via smartcards, and verification works as usual.

On the GitHub release page you can find an OpenPGP signature, which you can use to verify the release. You can also verify via sigstore.

SIGSTORE_LOGLEVEL=debug python -m sigstore verify --cert-email mail@kushaldas.in --cert-oidc-issuer https://github.com/login/oauth johnnycanencrypt-0.7.0.tar.gz
DEBUG:sigstore._cli:parsed arguments Namespace(subcommand='verify', certificate=None, signature=None, cert_email='mail@kushaldas.in', cert_oidc_issuer='https://github.com/login/oauth', rekor_url='https://rekor.sigstore.dev', staging=False, files=[PosixPath('johnnycanencrypt-0.7.0.tar.gz')])
DEBUG:sigstore._cli:Using certificate from: johnnycanencrypt-0.7.0.tar.gz.crt
DEBUG:sigstore._cli:Using signature from: johnnycanencrypt-0.7.0.tar.gz.sig
DEBUG:sigstore._cli:Verifying contents from: johnnycanencrypt-0.7.0.tar.gz
DEBUG:sigstore._verify:Successfully verified signing certificate validity...
DEBUG:sigstore._verify:Successfully verified signature...
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): rekor.sigstore.dev:443
DEBUG:urllib3.connectionpool:https://rekor.sigstore.dev:443 "POST /api/v1/index/retrieve/ HTTP/1.1" 200 85
DEBUG:urllib3.connectionpool:https://rekor.sigstore.dev:443 "GET /api/v1/log/entries/362f8ecba72f4326972bc321d658ba3c9197b29bb8015967e755a97e1fa4758c13222bc07f26d27c HTTP/1.1" 200 None
DEBUG:sigstore._verify:Successfully verified Rekor entry...
OK: johnnycanencrypt-0.7.0.tar.gz

It took me 8 months to get this release out; now it is time to write some tools to use it in more places :)

August 17, 2022 11:28 AM

dgplug mailing list has a new home

We were using the mailman2 instance provided by Dreamhost as the mailing list for dgplug for many years. But over the years, many participants had trouble receiving emails. In the last few years, most emails were landing in spam.

So, we took the chance to move to a new mailing list, and also started working on the site to have a Code of Conduct (CoC) properly defined. To make things easier, we will just follow the PSF Code of Conduct https://www.python.org/psf/conduct/; most of our members are already part of various upstream communities, so this will be nothing new for them. We will also be updating our sites to add details of a separate team who will handle CoC violation reports.

Summer Training will start on 25th July, so remember to join the new mailing list before that. See you all in the #dgplug IRC channel on the Libera network.

July 16, 2022 08:01 AM

Using sigstore-python to sign and verify your software release

Sigstore allows software developers to quickly sign and verify the software they release. Many of the bigger projects use hardware-based OpenPGP keys to sign and release. But the steps required to make sure that end-users correctly verify those signatures are long, and people make mistakes. Also, not every project has access to hardware smartcards, air-gapped private keys, etc. Sigstore solves these problems (or at least makes them way easier) for most developers. It uses existing, well-known OIDC providers (right now only 3 big ones), with which one can sign and verify any data/software.

For this blog post, I will use the Python tool called sigstore-python.

The first step is to create a virtual environment and then install the tool.

$ python3 -m venv .venv
$ source .venv/bin/activate
$ python -m pip install -r install/requirements.txt

Next, we create a file called message.txt with the data. This can be our actual release source code tarball.

$ echo "Kushal loves Python!" > message.txt

Signing the data

The next step is to actually sign the file.

$ python -m sigstore sign message.txt 
Waiting for browser interaction...
Using ephemeral certificate:
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----

Transparency log entry created at index: 2844439
Signature written to file message.txt.sig
Certificate written to file message.txt.crt

The command will open up the default browser, and we will have the choice to select one of the 3 following OIDC providers.

[Image: the OIDC provider selection page]

This will also create message.txt.crt & message.txt.sig files in the same directory.

We can use the openssl command to see the contents of the certificate file.

$ openssl x509 -in message.txt.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3a:c4:2d:19:20:f0:bf:85:37:a6:01:0f:49:d1:b6:39:20:06:fd:77
        Signature Algorithm: ecdsa-with-SHA384
        Issuer: O = sigstore.dev, CN = sigstore-intermediate
        Validity
            Not Before: Jul  5 14:45:23 2022 GMT
            Not After : Jul  5 14:55:23 2022 GMT
        Subject: 
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (384 bit)
                pub:
                    04:12:aa:88:fd:c7:1f:9e:62:78:46:2a:48:63:d3:
                    b6:92:8b:51:a4:eb:59:18:fb:18:a0:13:54:ac:d0:
                    a4:d8:20:ab:a3:f3:5e:f5:86:aa:34:9b:30:db:59:
                    1b:5c:3d:29:b1:5a:40:ff:55:2c:26:fc:42:58:95:
                    53:d6:23:e5:66:90:3c:32:8c:82:b7:fc:fd:f8:28:
                    2b:53:2d:5c:cb:df:2f:17:d0:f3:bc:26:d2:42:3d:
                    c0:b1:55:61:50:ff:18
                ASN1 OID: secp384r1
                NIST CURVE: P-384
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage: 
                Code Signing
            X509v3 Subject Key Identifier: 
                6C:F0:C0:63:B8:3D:BB:08:90:C3:03:45:FF:55:92:43:7D:47:19:38
            X509v3 Authority Key Identifier: 
                DF:D3:E9:CF:56:24:11:96:F9:A8:D8:E9:28:55:A2:C6:2E:18:64:3F
            X509v3 Subject Alternative Name: critical
                email:mail@kushaldas.in
            1.3.6.1.4.1.57264.1.1: 
                https://github.com/login/oauth
            CT Precertificate SCTs: 
                Signed Certificate Timestamp:
                    Version   : v1 (0x0)
                    Log ID    : 08:60:92:F0:28:52:FF:68:45:D1:D1:6B:27:84:9C:45:
                                67:18:AC:16:3D:C3:38:D2:6D:E6:BC:22:06:36:6F:72
                    Timestamp : Jul  5 14:45:23.112 2022 GMT
                    Extensions: none
                    Signature : ecdsa-with-SHA256
                                30:46:02:21:00:AB:A6:ED:59:3E:B7:C4:79:11:6A:92:
                                29:92:BF:54:45:6A:B6:1F:6F:1C:63:7C:D9:89:26:D4:
                                6B:EF:E3:3E:9F:02:21:00:AD:87:A7:BA:BA:7C:61:D2:
                                53:34:E0:D0:C4:BF:6A:6E:28:B4:02:82:AA:F8:FD:0B:
                                FB:3A:CD:B9:33:3D:F4:36
    Signature Algorithm: ecdsa-with-SHA384
    Signature Value:
        30:65:02:30:17:89:76:ef:a1:0e:97:5b:a3:fe:c0:34:13:36:
        3f:6f:2a:ba:e9:cd:bd:f2:74:d9:8c:13:2a:88:c9:96:b2:72:
        de:34:44:95:41:f8:b0:69:5b:f0:86:a7:05:cf:81:7f:02:31:
        00:d8:3a:12:89:39:4b:2c:ad:ff:5a:23:85:d9:c0:73:f0:b1:
        db:5c:65:f9:5d:ee:7a:bb:b8:08:01:44:7a:2e:9f:ba:2b:4b:
        df:6a:93:08:e9:44:2c:23:88:66:2c:f7:8f

Verifying the signature

We can verify the signature; just make sure that the certificate & signature files are in the same directory.

$ python -m sigstore verify message.txt 
OK: message.txt

Now, to test this with a real software release, we will download the cosign RPM package and the related certificate & signature files. The certificate in this case is base64-encoded, so we decode that file first.

$ curl -sOL https://github.com/sigstore/cosign/releases/download/v1.9.0/cosign-1.9.0.x86_64.rpm
$ curl -sOL https://github.com/sigstore/cosign/releases/download/v1.9.0/cosign-1.9.0.x86_64.rpm-keyless.sig
$ curl -sOL https://github.com/sigstore/cosign/releases/download/v1.9.0/cosign-1.9.0.x86_64.rpm-keyless.pem
$ base64 -d cosign-1.9.0.x86_64.rpm-keyless.pem > cosign-1.9.0.x86_64.rpm.pem

Now let us verify the downloaded RPM package along with the email address and the signing OIDC issuer URL. We also enable the debug statements, so that we can see what actually happens during verification.

$ SIGSTORE_LOGLEVEL=debug python -m sigstore verify --certificate cosign-1.9.0.x86_64.rpm.pem --signature cosign-1.9.0.x86_64.rpm-keyless.sig --cert-email keyless@projectsigstore.iam.gserviceaccount.com --cert-oidc-issuer https://accounts.google.com  cosign-1.9.0.x86_64.rpm

DEBUG:sigstore._cli:parsed arguments Namespace(subcommand='verify', certificate=PosixPath('cosign-1.9.0.x86_64.rpm.pem'), signature=PosixPath('cosign-1.9.0.x86_64.rpm-keyless.sig'), cert_email='keyless@projectsigstore.iam.gserviceaccount.com', cert_oidc_issuer='https://accounts.google.com', rekor_url='https://rekor.sigstore.dev', staging=False, files=[PosixPath('cosign-1.9.0.x86_64.rpm')])
DEBUG:sigstore._cli:Using certificate from: cosign-1.9.0.x86_64.rpm.pem
DEBUG:sigstore._cli:Using signature from: cosign-1.9.0.x86_64.rpm-keyless.sig
DEBUG:sigstore._cli:Verifying contents from: cosign-1.9.0.x86_64.rpm
DEBUG:sigstore._verify:Successfully verified signing certificate validity...
DEBUG:sigstore._verify:Successfully verified signature...
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): rekor.sigstore.dev:443
DEBUG:urllib3.connectionpool:https://rekor.sigstore.dev:443 "POST /api/v1/index/retrieve/ HTTP/1.1" 200 69
DEBUG:urllib3.connectionpool:https://rekor.sigstore.dev:443 "GET /api/v1/log/entries/9ee91f2c5444e4ff77a3a18885f46fa2b6f7e629450904d67b5920333327b90d HTTP/1.1" 200 None
DEBUG:sigstore._verify:Successfully verified Rekor entry...
OK: cosign-1.9.0.x86_64.rpm

Oh, one more important thing. The maintainers of the tool are amazing about feedback. I had some trouble initially (a few weeks ago). They sat down with me to make sure that they understood the problem & also solved the issue I had. You can talk to the team (and other users, including me) in the Slack room.

July 05, 2022 03:41 PM

Python for Everyone, yes it is

I moved to Sweden a few months back. I met a lot of women like me who came from different countries. Many of them had flourishing careers in their respective countries. Teacher, architect, economist, marketing professional, HR, lawyer, you name it. But now, in this country, they cannot find a living for themselves. The reasons are many, but the major one is language: most of them are not yet fluent in Swedish. Moving countries/continents is complex. And what makes it even more challenging is losing your individuality, financial independence, and career. I could feel their pain, all the more since I am going through a somewhat similar situation myself.

The only difference between us is that I had the Sigsum project to support me (for my job), PyLadies Stockholm, and the greater Python Sweden community. So I thought, why not share some of my fortune with them? Many of them wanted to build their career in tech, and we PyLadies might be able to help. I shared my idea with our excellent PyLadies Stockholm organizer and Python Sweden Board chair, Christine. She gave me the green signal and the mantra “go do it” :). I started reaching out to companies and communities for spaces, and Christine shared a few contacts with me. But it was so hard to get a place :(. Since it was midsummer, none of the offices were operating as usual. Finally, Sunet came to the rescue and provided the space for the meetup.


June 21, 2022, was the date for our meetup. We had women coming from different educational and cultural backgrounds. One thing we all had in common was that we wanted to learn a new skill to give life a fresh start, and to do it together. We know it is an uphill battle, and we are ready for the struggle. After a little nudge from me, the group, which was initially hesitant to give introductions, started sharing their projects and ideas with each other. The “Show and Tell” was a great success (and the 50-minute session was extended to 1 hour and 30 minutes). How lovely is that? People shared such a variety of projects with us, ranging from different Python libraries and web design projects to a password manager project, along with what excites them about Python and what they want to achieve. After that, I talked about the program “Python for Everyone”: what is it? And why is it? And whom is it for? It was an up-close and personal session.


We are going to have another mingle in the first half of August. And our Python for Everyone sessions will begin with the Python 101 session on August 2nd. In the meantime, we will focus on building the foundation, so we are ready for our sessions in August. Join us on our Slack channel and stay connected.


by Anwesha Das at June 28, 2022 11:06 AM

Kubernetes 1.25 Enhancements Role Lead! #33

June 14, 2022

Extending on my earlier post about the Kubernetes Release Team: I'm serving as the Enhancements Role Lead for the current Kubernetes 1.25 Release Team.

As a role lead this time, I have a group of five outstanding shadows whom I am not only mentoring to become future leads, but am also learning from - both “how to teach” and “how to learn”.

I haven’t posted in a long time (ups & downs & new roles & responsibilities & sometimes you don’t feel like doing anything at all & it literally takes all the energy even to do what’s required).

So, just adding that I was also an Enhancements Shadow on the Kubernetes 1.24 Release Team, and my former role lead, Grace Nguyen, nominated me to be the next role lead at the conclusion of the previous release cycle.


When I look back on my time throughout these three cycles, I’m amazed at how much I’ve learned. It’s been a great experience. 🙂 Not only did I learn, but I also felt recognized.

Currently, we’re at Week 4 of the 1.25 release cycle, and it’s one of the busiest for the Enhancements role (we’re almost approaching Enhancements Freeze in a week). I would say we’re doing well so far! 😄


And one more thing before I finish up this small post!

I got to go to my first ever KubeCon event in person!

I had the opportunity to attend the KubeCon EU 2022 event in Valencia, Spain (my first ever international travel as well). I was astonished that so many people knew who I was (anything more than zero was “so many” for me) and that I already belonged to a tiny group of people. It was an incredible feeling.

I’m not much of a photo person, but sharing some 🙂


June 14, 2022 12:00 AM

Progressive Enhancement is not anti-JavaScript

Yesterday, I came across a tweet by Sara Soueidan, which resonated with me. Mostly because I have had this discussion (or heated arguments) quite a few times with many folks. Please go and read her tweet thread since she mentions some really great points about why progressive enhancement is not anti-js. As someone who cares about security, privacy, and accessibility, I have always been an advocate of progressive enhancement. I always believe that a website (or any web-based solution) should be accessible even without JavaScript in the browser. And more often than not, people take me as someone who is anti-JavaScript. Well, let me explain with the help (a lot of help) of resources already created by other brilliant folks.

What is Progressive Enhancement?

Progressive enhancement is the idea of making a very simple, baseline foundation for a website that is accessible and usable by all users irrespective of their input/output devices, browsers (or user-agents), or the technology they are using. Then, once you have done that, you sprinkle more fancy animations and custom UI on top that might make it look more beautiful for users with the ideal devices.

I know I probably didn't do a perfect job explaining the idea of progressive enhancement. So honestly, just go and watch this video on progressive enhancement by Heydon Pickering.

So how to do this Progressive Enhancement?

If you saw the video by Heydon, I am sure you are starting to get some idea. Here I am going to reference another video titled Visual Styling vs. Semantic Meaning, created by Manuel Matuzović. I love how, in this video, Manuel shares the idea of building semantically first and then styling visually.

So, a good way to do progressive enhancement, according to me, is:

  1. Start with HTML - This is a very good place to start, because not only does this ensure that almost all browsers and user devices can render this, but also it helps you think semantically instead of based on the visual design. That already starts making your website not only good for different browsers, but also for screen reader and assistive technology users.

  2. Add basic layout CSS progressively - This is the step where you start applying visual designs. But only the basic layouts. This progressively enhances the visual look of the website, and also you can add things like better focus styles, etc. Be careful and check caniuse.com to add CSS features that are well supported across most browsers in different versions. Remember what Heydon said? "A basic Layout is not a broken layout".

  3. Add fancy CSS progressively - Add more recent CSS features for layout and to progressively enhance the visual styling of your website. Here you can add much newer features that make the design look even more polished.

  4. Add fancy JavaScript sparkles progressively - If there are animations and interactions that you would like the user to have that are not possible with HTML & CSS, then start adding your JavaScript at this stage. JavaScript is often necessary for creating accessible custom UIs. So absolutely use it when necessary to progressively enhance the experience of your users based on the user-agents they have.

SEE! I told you to add JavaScript! So no, progressive enhancement is not about being anti-JavaScript. It's about progressively adding JavaScript wherever necessary to enhance the features of the website, without blocking the basic content, layout and interactions for non-JavaScript users.
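As a tiny illustration of that idea (the element id and URL here are hypothetical), the link below works for everyone, and JavaScript only upgrades the experience when it is available:

<!-- Without JavaScript, this link simply navigates to the search page -->
<a href="/search" id="search-link">Search</a>

<script>
  // Progressive enhancement: if JavaScript runs, replace the full page
  // navigation with an inline search box
  var link = document.getElementById("search-link");
  link.addEventListener("click", function (event) {
    event.preventDefault();
    var input = document.createElement("input");
    input.type = "search";
    input.setAttribute("aria-label", "Search");
    link.replaceWith(input);
    input.focus();
  });
</script>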

Well, why should I not write everything in JavaScript?

I know it's trendy these days to learn fancy new JavaScript frameworks and write fancy new interactive websites. So many of you at this point must be like, "Why won't we write everything in JavaScript? Maybe you hate JavaScript, that's why you are talking about these random HTML & CSS things. What are those? Is HTML even a programming language?"

Well firstly, I love JavaScript. I have contributed to many JavaScript projects, including jQuery. So no, I don't hate JavaScript. But I love to use JavaScript for what JavaScript is supposed to be used for. And in most cases, laying out pages or loading basic content isn't one of them.

But who are these people who need websites to work without JavaScript?

  • People who have devices with only older browsers. Remember, buying a new device isn't so easy in every part of the world, and sometimes some devices may have user-agents that don't support fancy JavaScript. But their owners still have the right to read the content of the website.
  • People who care about their security and privacy. A lot of security- and privacy-focused people prefer using a browser like Tor Browser with JavaScript disabled to avoid any kind of malicious JavaScript or JavaScript-based tracking. Some users even use extensions like NoScript with common browsers (Firefox, Chrome, etc.) for similar reasons. But just because they care about their security and privacy doesn't mean they shouldn't have access to website content.
  • People with not-so-great internet. Many parts of the world still don't have access to great internet and rely on 2G connections. Loading a huge bundled JavaScript framework with all its sparkles and features often takes an unrealistically long time. But they should still be able to access the content of a website article.

So, yes. It's not about not using JavaScript. It's more about starting without JavaScript, and then adding your bells and whistles with JavaScript. That way, people who don't use JavaScript can still access at least the basic content.

See this amazing example of progressive enhancement using JavaScript by Adrian Roselli: https://twitter.com/aardrian/status/1527735474592284672

Here is another really great talk by Max Böck in id24: https://www.youtube.com/watch?v=8RdrRCq8VzU

May 20, 2022 08:48 PM

Django: How to acquire a lock on the database rows?

select_for_update is the answer if you want to acquire a lock on a row. The lock is only released after the transaction is completed. This is similar to the SELECT ... FOR UPDATE statement in SQL.

>>> Dealership.objects.select_for_update().get(pk='iamid')
>>> # Here lock is only required on Dealership object
>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))

select_for_update has these four arguments with these default values: nowait=False, skip_locked=False, of=(), no_key=False.

Let's see what all these arguments mean.

nowait

Think of the scenario where the lock is already acquired by another query. In this case, you may want your query to either wait or raise an error. This behavior is controlled by nowait: if nowait=True, the query raises DatabaseError rather than waiting for the lock to be released.

skip_locked

As the name somewhat implies, it decides whether locked rows are considered in the evaluated query. If skip_locked=True, locked rows will not be considered.

nowait and skip_locked are mutually exclusive; using both together will raise a ValueError.

of

When a select_for_update query is evaluated, the lock is also acquired on the related rows selected in the query. If one doesn't wish that, one can use of to specify exactly which objects to lock:

>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))
# Just be sure we don't have any nullable relation with OEM

no_key

This helps you create a weaker lock. This means other queries can still create new rows which refer to the locked rows (via any reference relationship).

A few more important points to keep in mind: select_for_update doesn't allow nullable relations, so you have to explicitly exclude these nullable conditions. In auto-commit mode, select_for_update fails with a TransactionManagementError; you have to put the code inside a transaction explicitly. I have struggled with both of these points :).
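Putting it together, a minimal sketch (reusing the Dealership model from above; the field name is made up):

from django.db import transaction

# select_for_update must run inside a transaction; in auto-commit mode
# it raises TransactionManagementError
with transaction.atomic():
    # nowait=True raises DatabaseError immediately if the row is locked
    dealership = Dealership.objects.select_for_update(nowait=True).get(pk="iamid")
    dealership.name = "New name"  # hypothetical field
    dealership.save()
# the lock is released once the atomic block (transaction) exits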

That is all you need to know about select_for_update in order to use it in your code and make changes to your database.

Cheers!

#Python #Django #ORM #Database

May 14, 2022 02:06 PM

There is a lot more to autocomplete than you think

Anyone who has dealt with the <form> tag in HTML might have come across the autocomplete attribute. Most developers just put autocomplete="on" or autocomplete="off" based on whether they want users to be able to autocomplete the form fields or not. But there's much more to the autocomplete attribute than many folks may know.

Browser settings

Most widely used browsers (Firefox, Chrome, Safari, etc.), by default, remember information that is submitted using a form. When the user later tries to fill another form, browsers look at the name or type attribute of the form field, and then offer to autocomplete or autofill based on the saved information from previous form submissions. I am assuming many of you might have experienced these autocompletion suggestions while filling up forms. Some browsers, like Firefox, look at the id attribute and sometimes even the value of the <label> associated with the input field.

Autofill detail tokens

For a long time, the only valid values for the autocomplete attribute were "on" or "off", based on whether the website developer wanted to allow the browser to automatically complete the input. However, in the case of "on", it was left entirely to the browser to determine which value was expected by the input field. Now, for some time, the autocomplete attribute has allowed some other values, which are collectively called autofill detail tokens.

<div>
  <label for="organization">Enter your credit card number</label>
  <input name="organization" id="organization" autocomplete="organization">
</div>

These values help tell the browser exactly what the input field expects, without needing the browser to guess it. There is a big list of autofill detail tokens. Some of the common ones are "name", "email", "username", "organization", "country", "cc-number", and so on. Check the WHATWG Standard for autofill detail tokens to understand which values are valid and how they are determined.

There are two different autofill detail tokens associated with passwords which have some interesting features apart from the autocompletion:

  • "new-password" - This is supposed to be used for "new password field"s or for "confirm new password field"s. This helps separate a current password field from a new password field. Most browsers and most password managers, when they see this in autocomplete attribute, will avoid accidentaly filling existing passwords. Some even suggest a new randomly generated password for the field if autocomplete has "new-password" value.
  • "current-password" - This is used by browsers and password managers to autofill or suggest autocompletion with the current saved password for that email/username for that website.

The above two tokens really help in intentionally separating new password fields from login password fields. Otherwise, browsers and password managers don't have much to distinguish the two different fields by, and may guess wrong.
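For illustration, a password-change form using both tokens could look like this (the field names are hypothetical):

<div>
  <label for="current">Current password</label>
  <input type="password" id="current" name="current" autocomplete="current-password">

  <label for="new">New password</label>
  <input type="password" id="new" name="new" autocomplete="new-password">
</div>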

Privacy concerns

Now, all of the above points might already be giving privacy and security nightmares to many of you. Firstly, the above scenario works only if you are on the same computer, using the same account and the same browser. But there are a few things you can do to avoid autocompletion, or to avoid saving the data entered when filling up a form.

  • Use the browser in privacy/incognito mode. Most browsers will not save the form data submitted to a website when opened in incognito mode. They will, however, still suggest autocompletion based on the information saved in normal mode.
  • If you already have autocomplete information saved from before, but want to remove it now, you can. Most browsers allow you to clear form and search history from the browser.
  • If you want to disable autofill and autocomplete, you can do that as well from browser settings. This will also tell the browsers to never remember the values entered into the form fields.


Now, if you are a privacy-focused developer like me, you might be wondering, "Can't I as a developer help protect privacy?". Yes, we can! That's exactly what autocomplete="off" is still there for. We can still add that attribute to an entire <form>, which will disable both remembering and autocompletion of all form data in that form. We can also add autocomplete="off" individually to specific <input>, <textarea>, <select> elements to disable the remembering and autocompletion of specific fields instead of the entire form.

PS: Even with autocomplete="off", most browsers still offer to remember the username and password. This is actually done for the same reason digital security trainers ask people to use password managers: so that users don't reuse the same simple passwords everywhere just because they have to remember them. As a digital security trainer, I would still recommend not using your browser's save-password feature, and instead using a password manager. Password managers actually follow the same rule of remembering and auto-filling username and password fields even with autocomplete="off".

Accessibility

So, as a privacy-focused developer, you might be wondering, "Well, I should just use autocomplete="off" in every <form> I write from today". Well, that raises some huge accessibility concerns. If you love standards, then look specifically at Understanding Success Criterion 1.3.5: Identify Input Purpose.

There are folks with different disabilities who really benefit from the autocomplete tag, which makes it super important for accessibility:

  • People with disabilities related to memory, language or decision-making benefit immensely from the auto-filling of data and from not needing to remember the information every time they fill up a form.
  • People with disabilities who prefer images/icons for communication can use assistive technology to add icons associated with the various input fields. A lot of them can benefit from proper autocomplete values when the name attribute is not suitable for that.
  • People with motor disabilities benefit from not needing to manually input forms every time.

So, given that almost all browsers have settings to disable these features, it might be okay to not always use autocomplete="off". But, if there are fields that are essentially super sensitive, that you would never want the browser to save information about (e.g., government id, one time pin, credit card security code, etc.), you should use autocomplete="off" on the individual fields instead of the entire <form>. Even if you really really think that the entire form is super sensitive and you need to apply autocomplete="off" on the entire <form> element to protect your user's privacy, you should still at least use autofill detail tokens for the individual fields. This will ensure that the browser doesn't remember the data entered or suggest autofills, but will still help assistive technologies to programmatically determine the purpose of the fields.


May 08, 2022 10:30 AM

Network operations

what is a network?

A network is a group of computers and computing devices connected together through communication channels, such as cables or wireless media. The computers connected over a network may be located in the same geographical area or spread across the world. The Internet is the largest network in the world and can be called "the network of networks".

ip address

Devices attached to a network must have at least one unique network address identifier known as the IP (Internet Protocol) address. The address is essential for routing packets of information through the network. Exchanging information across the network requires using streams of small packets, each of which contains a piece of the information going from one machine to another. These packets contain data buffers, together with headers which contain information about where the packet is going to and coming from, and where it fits in the sequence of packets that constitute the stream. Networking protocols and software are rather complicated due to the diversity of machines and operating systems they must deal with, as well as the fact that even very old standards must be supported.

IPV4 and IPV6

There are two different types of IP addresses available: IPv4 (version 4) and IPv6 (version 6). IPv4 is older and by far the more widely used, while IPv6 is newer and is designed to get past limitations inherent in the older standard and furnish many more possible addresses.

IPv4 uses 32 bits for addresses; there are only 4.3 billion unique addresses available. Furthermore, many addresses are allotted and reserved, but not actually used. IPv4 is considered inadequate for meeting future needs because the number of devices available on the global network has increased enormously in recent years.

IPv6 uses 128 bits for addresses; this allows for 3.4 x 10^38 unique addresses. If you have a larger network of computers and want to add more, you may want to move to IPv6, because it provides more unique addresses. However, it can be complex to migrate to IPv6; the two protocols do not always inter-operate well. Thus, moving equipment and addresses to IPv6 requires significant effort and has not been quite as fast as was originally intended. We will discuss IPv4 more than IPv6, as you are more likely to deal with it.

One reason IPv4 has not disappeared is that there are ways to effectively make many more addresses available by methods such as NAT (Network Address Translation). NAT enables sharing one IP address among many locally connected computers, each of which has a unique address seen only on the local network. While this is used in organizational settings, it is also used in simple home networks. For example, if you have a router hooked up to your Internet Provider (such as a cable system), it gives you one externally visible address, but issues each device in your home an individual local address.

decoding IPv4

A 32-bit IPv4 address is divided into four 8-bit sections called octets.

Example:
IP address → 172.16.31.46
Bit format → 10101100.00010000.00011111.00101110

NOTE: Octet is just another word for byte.
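If you want to check such a decomposition yourself, Python's standard ipaddress module can do it (a quick sketch):

import ipaddress

addr = ipaddress.ip_address("172.16.31.46")
print(list(addr.packed))          # the four octets: [172, 16, 31, 46]
print(format(int(addr), "032b"))  # 10101100000100000001111100101110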

Network addresses are divided into five classes: A, B, C, D and E. Classes A, B and C are classified into two parts: Network addresses (Net ID) and Host address (Host ID). The Net ID is used to identify the network, while the Host ID is used to identify a host in the network. Class D is used for special multicast applications (information is broadcast to multiple computers simultaneously) and Class E is reserved for future use.

  • Class A network address – Class A addresses use the first octet of an IP address as their Net ID and use the other three octets as the Host ID. The first bit of the first octet is always set to zero, so only 7 bits are usable for unique network numbers. As a result, there are a maximum of 126 Class A networks available (the addresses 0000000 and 1111111 are reserved). Not surprisingly, this was only feasible when there were very few unique networks with large numbers of hosts. As the use of the Internet expanded, Classes B and C were added in order to accommodate the growing demand for independent networks. Each Class A network can have up to 16.7 million unique hosts on its network. The range of host addresses is from 1.0.0.0 to 127.255.255.255.

  • Class B network address – Class B addresses use the first two octets of the IP address as their Net ID and the last two octets as the Host ID. The first two bits of the first octet are always set to binary 10, so there are a maximum of 16,384 (14 bits) Class B networks. The first octet of a Class B address has values from 128 to 191. The introduction of Class B networks expanded the number of networks, but it soon became clear that a further level would be needed. Each Class B network can support a maximum of 65,536 unique hosts on its network. The range of host addresses is from 128.0.0.0 to 191.255.255.255.

  • Class C network address – Class C addresses use the first three octets of the IP address as their Net ID and the last octet as their Host ID. The first three bits of the first octet are set to binary 110, so almost 2.1 million (21 bits) Class C networks are available. The first octet of a Class C address has values from 192 to 223. These are most common for smaller networks which don't have many unique hosts. Each Class C network can support up to 256 (8 bits) unique hosts. The range of host addresses is from 192.0.0.0 to 223.255.255.255.

what is name resolution?

Name Resolution is used to convert numerical IP address values into a human-readable format known as the hostname. For example, 104.95.85.15 is the numerical IP address that refers to the hostname whitehouse.gov. Hostnames are much easier to remember!

Given an IP address, one can obtain its corresponding hostname. Accessing the machine over the network becomes easier when one can type the hostname instead of the IP address.

Then come the network configuration files, which are essential to ensure that interfaces function correctly. They are located in the /etc directory tree. However, the exact files used have historically depended on the particular Linux distribution and version being used.

For Debian family configurations, the basic network configuration files could be found under /etc/network/, while for Red Hat and SUSE family systems one needed to inspect /etc/sysconfig/network.

Network interfaces are a connection channel between a device and a network. Physically, network interfaces can be provided through a network interface card (NIC), or can be more abstractly implemented as software. You can have multiple network interfaces operating at once. Specific interfaces can be brought up (activated) or brought down (deactivated) at any time.

A network requires the connection of many nodes. Data moves from source to destination by passing through a series of routers and potentially across multiple networks. Servers maintain routing tables containing the addresses of each node in the network. The IP routing protocols enable routers to build up a forwarding table that correlates final destinations with the next hop addresses.

Let’s learn about more networking tools, like wget and curl. Sometimes you need to download files and information, but a browser is not the best choice, either because you want to download multiple files and/or directories, or because you want to perform the action from a command line or a script. wget is a command line utility that can capably handle these kinds of downloads, whereas curl is used to transfer data to or from a given URL and to obtain information about it.
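For example (the URL is a placeholder):

$ wget -r https://example.com/files/    # download files recursively
$ curl -I https://example.com/          # fetch only the response headers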

File Transfer Protocol (FTP)

File Transfer Protocol (FTP) is a well-known and popular method for transferring files between computers using the Internet. This method is built on a client-server model. FTP can be used within a browser or with stand-alone client programs. FTP is one of the oldest methods of network data transfer, dating back to the early 1970s.

Secure Shell (SSH)

Secure Shell (SSH) is a cryptographic network protocol used for secure data communication. It is also used for remote command execution and other secure services between two devices on the network, and is very useful for administering systems which are not easily available to physically work on, but to which you have remote access.
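For example, to log in to a remote machine, or to run a single command on it (the hostname is a placeholder):

$ ssh user@remote.example.com
$ ssh user@remote.example.com uptime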

by climoiselle at April 12, 2022 02:22 PM

Manipulating text in Linux!

There are a few command line tools which we can use to parse text files. These help us on a day-to-day basis when using Linux, and it becomes essential for a Linux user to be adept at performing certain operations on files.

cat

cat is short for concatenate and is one of the most frequently used Linux command line utilities. It is often used to read and print files, as well as for simply viewing file contents. To view a file, use the following command:

$ cat <filename>

For example, cat readme.txt will display the contents of readme.txt on the terminal. However, the main purpose of cat is often to combine (concatenate) multiple files together. The tac command (cat spelled backwards) prints the lines of a file in reverse order. Each line remains the same, but the order of lines is inverted. cat can be used to read from standard input (such as the terminal window) if no files are specified. You can use the > operator to create and add lines into a new file, and the >> operator to append lines (or files) to an existing file. We mentioned this when talking about how to create files without an editor.

To create a new file, at the command prompt type cat > <filename> and press the Enter key. This command creates a new file and waits for the user to enter text; after editing, type CTRL-D at the beginning of the next line to save and exit.
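For example, to concatenate two files into a third one (the filenames are hypothetical):

$ cat part1.txt part2.txt > combined.txt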

echo

echo simply displays (echoes) text. It is used as in:

$ echo string

echo can be used to display a string on standard output (i.e. the terminal) or to place in a new file (using the > operator) or append to an already existing file (using the >> operator).

The -e option, along with the following switches, is used to enable special character sequences, such as the newline character or horizontal tab:

  • \n represents newline
  • \t represents horizontal tab.

echo is particularly useful for viewing the values of environment variables (built-in shell variables). For example, echo $USERNAME will print the name of the user who has logged into the current terminal.
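For example, with -e the escape sequences are interpreted:

$ echo -e "line one\nline two"
line one
line two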

how to work with large files?

System administrators need to work with configuration files, text files, documentation files, and log files. Some of these files may be large or become quite large as they accumulate data with time. These files will require both viewing and administrative updating.

For example, a banking system might maintain one simple large log file to record details of all of one day’s ATM transactions. Due to a security attack or a malfunction, the administrator might be forced to check for some data by navigating within the file. In such cases, directly opening the file in an editor will cause issues, due to high memory utilization, as an editor will usually try to read the whole file into memory first. However, one can use less to view the contents of such a large file, scrolling up and down page by page, without the system having to place the entire file in memory before starting. This is much faster than using a text editor.

head reads the first few lines of each named file (10 by default) and displays them on standard output. You can specify a different number of lines with an option.

For example, if you want to print the first 5 lines from /etc/default/grub, use the following command:

$ head -n 5 /etc/default/grub

tail prints the last few lines of each named file and displays them on standard output. By default, it displays the last 10 lines. You can specify a different number of lines as an option. tail is especially useful when you are troubleshooting an issue using log files, as you probably want to see the most recent lines of output. For example, to display the last 15 lines of somefile.log, use the following command:

$ tail -n 15 somefile.log

to view compressed files

When working with compressed files, many standard commands cannot be used directly. For many commonly-used file and text manipulation programs, there is also a version especially designed to work directly with compressed files. These associated utilities have the letter "z" prefixed to their name. For example, we have utility programs such as zcat, zless, zdiff and zgrep.
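For example (the log file name is hypothetical):

$ zless /var/log/syslog.2.gz            # page through a compressed file
$ zgrep "error" /var/log/syslog.2.gz    # search inside a compressed file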

managing your files

Linux provides numerous file manipulation utilities that you can use while working with text files.

  • sort – is used to rearrange the lines of a text file, in either ascending or descending order according to a sort key. The default sort key is the order of the ASCII characters (i.e. essentially alphabetical).
  • uniq – removes duplicate consecutive lines in a text file and is useful for simplifying the text display.
  • paste – can be used to combine files side by side into a single multi-column file. The different columns are identified based on delimiters (spacing used to separate two fields); for example, a delimiter can be a blank space, a tab, or an Enter.
  • split – is used to break up (or split) a file into equal-sized segments for easier viewing and manipulation, and is generally used only on relatively large files. By default, split breaks up a file into 1000-line segments. The original file remains unchanged, and a set of new files with the same name plus an added prefix is created. By default, the x prefix is added. To split a file into segments, use the command split infile. See the sketch below for usage examples.
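A few usage sketches (the filenames are hypothetical):

$ sort names.txt | uniq > unique_names.txt   # sort and de-duplicate
$ paste first.txt second.txt                 # combine files column-wise
$ split -l 500 biglog.txt chunk_             # 500-line pieces: chunk_aa, chunk_ab, ...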

Regular expressions are text strings used for matching a specific pattern, or for searching for a specific location, such as the start or end of a line or a word. Regular expressions can contain both normal characters and so-called meta-characters, such as * and $.

grep is extensively used as the primary text searching tool. It scans files for specified patterns and can be used with regular expressions as well as simple strings.
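For example (the file name is hypothetical):

$ grep -i "warning" logfile.txt    # case-insensitive match
$ grep "^error" logfile.txt        # lines beginning with "error"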

by climoiselle at April 04, 2022 03:51 PM

File permissions!

In Linux and other UNIX-based operating systems, every file is associated with a user who is the owner. Every file is also associated with a group (a subset of all users) which has an interest in the file and certain rights, or permissions: read, write, and execute.

  • chown – Used to change user ownership of a file or directory
  • chgrp – Used to change group ownership
  • chmod – Used to change the permissions on the file, which can be done separately for owner, group and the rest of the world (often named as other)

Files have three kinds of permissions: read (r), write (w), execute (x). These are generally represented as in rwx. These permissions affect three groups of owners: user/owner (u), group (g), and others (o).

As a result, you have the following three groups of three permissions:

rwx : rwx : rwx
 u  :  g  :  o

There are a number of different ways to use chmod. u stands for user (owner), o stands for other (world), and g stands for group.

This kind of syntax can be difficult to type and remember, so one often uses a shorthand which lets you set all the permissions in one step. This is done with a simple algorithm, and a single digit suffices to specify all three permission bits for each entity. This digit is the sum of:

  1. 4 if read permission is desired
  2. 2 if write permission is desired
  3. 1 if execute permission is desired.

Thus, 7 means read/write/execute, 6 means read/write, and 5 means read/execute.
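For example (the script name is hypothetical):

$ chmod 755 myscript.sh    # rwx for the owner, r-x for group and others
$ chmod u+x myscript.sh    # the longhand way: add execute for the owner only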

by climoiselle at March 24, 2022 11:34 AM

Text editors of Linux

Proceeding ahead with the different text editors found in Linux. Sometimes we may need to manually edit text files, and we can do this using a text editor instead of graphical utilities for creating and modifying system configuration files. Linux is packed with choices when it comes to text editors, ranging from quite simple to very complex, including:

  • nano
  • gedit
  • vi
  • emacs

One can create files without opening a full text editor by typing a couple of commands. If you want to create a file without using an editor, there are two standard ways to create one from the command line and fill it with content.

The first is to use echo repeatedly:

$ echo line one > myfile
$ echo line two >> myfile
$ echo line three >> myfile

Note that while a single greater-than sign (>) will send the output of a command to a file, two of them (>>) will append the new output to an existing file.

The second way is to use cat combined with redirection:

$ cat << EOF > myfile
> line one
> line two
> line three
> EOF
$

Both techniques produce a file with the same three lines in it.

nano and gedit

There are some text editors that are pretty obvious; they require no particular experience to learn and are actually quite capable, even robust. A particularly easy-to-use one is the terminal-based editor nano. Just invoke nano by giving a file name as an argument. As a graphical editor, gedit is part of the GNOME desktop system (kwrite is associated with KDE). The gedit and kwrite editors are very easy to use and are extremely capable. They are also very configurable. They look a lot like Notepad in Windows. Other variants, such as kate, are also supported by KDE.

nano is easy to use, and requires very little effort to learn. To open a file, type nano <filename> and press Enter. If the file does not exist, it will be created.

nano provides a two-line shortcut bar at the bottom of the screen that lists the available commands. Some of these commands are:

  • CTRL-G – Display the help screen.
  • CTRL-O – Write to a file.
  • CTRL-X – Exit a file.
  • CTRL-R – Insert contents from another file to the current buffer.
  • CTRL-C – Show cursor position.

gedit (pronounced ‘g-edit’) is a simple-to-use graphical editor that can only be run within a Graphical Desktop environment. It is visually quite similar to the Notepad text editor in Windows, but is actually far more capable, very configurable, and has a wealth of plugins available to extend its capabilities further.

To open a new file, find the program in your desktop's menu system, or from the command line type gedit <filename>. If the file does not exist, it will be created.

Using gedit is pretty straightforward and does not require much training. Its interface is composed of quite familiar elements.

vi and emacs

Both vi and emacs have a basic purely text-based form that can run in a non-graphical environment. They also have one or more graphical interface forms with extended capabilities; these may be friendlier for a less experienced user. While vi and emacs can have significantly steep learning curves for new users, they are extremely efficient when one has learned how to use them.

Intro to vi

Usually, the actual program installed on your system is vim, which stands for Vi IMproved, and is aliased to the name vi. The name is pronounced as “vee-eye”.

Even if you do not want to use vi, it is good to gain some familiarity with it: it is a standard tool installed on virtually all Linux distributions. Indeed, there may be times where there is no other editor available on the system.

GNOME extends vi with a very graphical interface known as gvim and KDE offers kvim. Either of these may be easier to use at first.

Typing :sh opens an external command shell. When you exit the shell, you will resume your editing session.

Typing :! executes a command from within vi. The command follows the exclamation point. This technique is best suited for non-interactive commands, such as :!wc %. Typing this will run the wc (word count) command on the file; the character % represents the file currently being edited.

emacs

The emacs editor is a popular competitor for vi. Unlike vi, it does not work with modes. emacs is highly customizable and includes a large number of features. It was initially designed for use on a console, but was soon adapted to work with a GUI as well. emacs has many capabilities other than simple text editing; for example, it can be used for email, debugging, etc.

by climoiselle at March 24, 2022 09:52 AM

RepRap 3D printer revision 2

Previously, I wrote about the first revision of our RepRap machine based on the Prusa i3 printer. This is a project on which I have been working with my younger brother. I will be talking about the enhancements, issues, and learnings from the second build of the printer. 3D printed printer parts: as soon as we got the first build of the printer working, we started printing printer parts. Basically, the idea is to replace the wooden parts with 3D printed parts, which have way better precision.
by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at March 20, 2022 01:41 PM
