Planet DGPLUG

Feed aggregator for the DGPLUG community

Aggregated articles from feeds

Understanding the massive zero-day impacting Dogecoin and 280+ networks, including Litecoin and Zcash

Halborn discovered a massive #ZeroDay vulnerability code-named Rab13s impacting Dogecoin and 280+ networks, including Litecoin and Zcash, putting over $25 Billion of digital assets at risk.

To understand the Rab13s zero-day vulnerability, we need to go through some basic concepts first, so let me explain what a blockchain is and what its key characteristics are.

A blockchain is a data structure used to represent a cryptocurrency. It stores data in a way that allows multiple parties to access it reliably without having to trust one another.

The key characteristics of a blockchain are:

Decentralized control: Communal consensus, rather than one party’s decision, dictates who gets to access or update the blockchain.

Tamper-evidence: It’s immediately obvious if data stored on the blockchain has been tampered with.

Nakamoto consensus: One has to provably spend resources when updating the blockchain.

Now as we know the basics of blockchain, we can learn about how transactions are added to the blockchain.

As new transactions happen, they are bundled into “blocks,” which are added to the blockchain with backlinks to enforce their order. The data is stored by, and updates are broadcast to, everyone on the network.
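As a rough illustration of those backlinks, here is a minimal Python sketch (not how any real chain stores its data) in which each block records the hash of its predecessor, so tampering with an old block is immediately evident:

import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A tiny two-block chain: each block carries the hash of the one before it
genesis = {"prev": "0" * 64, "transactions": ["Alice pays Bob 1 coin"]}
block_1 = {"prev": block_hash(genesis), "transactions": ["Bob pays Carol 1 coin"]}

# Tamper-evidence: editing the old block changes its hash,
# so block_1's backlink no longer matches
genesis["transactions"] = ["Alice pays Mallory 1 coin"]
print(block_1["prev"] == block_hash(genesis))  # prints False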

This image illustrates a series of transactions from various sources, each represented by a unique cartoon character.

Once a transaction is initiated, it is added to a queue to be processed and added to a block. On the left side of the image, multiple blocks are shown in a line, with transactions being selected for inclusion based on the fees paid. Transactions with higher fees are prioritized and added to the current block, while those with lower fees may have to wait for the next block.

As the block reaches its capacity, all transactions contained within it are finalized and added to the blockchain. This process ensures that all transactions are validated and securely recorded while incentivising miners to prioritize high-value transactions. Overall, this system helps ensure the blockchain network’s integrity and efficiency.
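To make the fee-based ordering concrete, here is a small Python sketch (the block capacity and the fee values are invented for illustration) that fills the current block with the highest-fee transactions and leaves the rest waiting:

transactions = [
    {"id": "tx1", "fee": 0.8}, {"id": "tx2", "fee": 0.1},
    {"id": "tx3", "fee": 0.5}, {"id": "tx4", "fee": 0.3},
]
BLOCK_CAPACITY = 2  # hypothetical; real blocks are limited by size, not count

# Miners prefer higher fees, so order the queue by fee, highest first
queue = sorted(transactions, key=lambda tx: tx["fee"], reverse=True)
current_block = queue[:BLOCK_CAPACITY]  # finalized in this block
waiting = queue[BLOCK_CAPACITY:]        # waits for the next block

print([tx["id"] for tx in current_block])  # ['tx1', 'tx3']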

Now that we know how transactions are added to the blockchain, you might want to know who checks and verifies the transactions before adding blocks to the blockchain.

Mining is the process that Bitcoin and several other cryptocurrencies use to generate new coins and verify new transactions. It involves vast, decentralized networks of computers around the world that verify and secure blockchains – the virtual ledgers that document cryptocurrency transactions. In return for contributing their processing power, computers on the network are rewarded with new coins. It’s a virtuous circle: the miners maintain and secure the blockchain, the blockchain awards the coins, and the coins incentivise miners to maintain the blockchain.

The high-level view of mining:

  1. Download the entire Bitcoin blockchain
  2. Verify incoming transactions 
  3. Create a block 
  4. Do the work; here, a bunch of seemingly pointless brute-force computations (find a valid nonce; see the sketch after this list)
  5. Broadcast your block 
  6. Get the Reward 
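Step 4 is the heart of proof-of-work. Here is a minimal Python sketch of the nonce search (the difficulty rule, requiring the hash to start with a few zero hex digits, is a simplification of Bitcoin's real target):

import hashlib

def find_nonce(block_data: str, difficulty: int = 4) -> int:
    # Keep trying nonces until the block's hash starts with
    # `difficulty` zero hex digits -- this is the "work"
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

print(find_nonce("prev_hash+transactions"))

Finding the nonce is expensive, but verifying it takes a single hash, which is what lets every other node cheaply check a broadcast block.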

Blockchain nodes play a critical role in maintaining the integrity of the blockchain. They are network stakeholders whose devices are authorized to keep track of the distributed ledger and serve as communication hubs for various network tasks.

To add blocks to the blockchain, the nodes that are responsible for mining must come to a consensus. Multiple nodes are attempting to obtain the reward, so they must reach a mutual agreement before a block can be added to the blockchain.

Consensus refers to the process of reaching an agreement among participants in a network. In the context of a ledger of transactions, the consensus is required to agree on any changes made to it.

For instance, when Alice promises 1 BTC to Bob in one transaction and the same 1 BTC to Carol in another, this creates a double-spend conflict that needs to be resolved through consensus.

In a double-spend attack, an individual attempts to spend the same cryptocurrency twice. To execute this attack successfully, the attacker must control most of the network’s computing power, typically around 51%.

If Alice were to attempt a double spend attack, she would need to control the majority of the network’s computing power. Otherwise, the rest of the network would not accept her version of the blockchain.

It’s important to note that if anyone could create a node and add blocks to the network, it would make the network vulnerable to attacks from individuals seeking to manipulate the ledger for their benefit. To prevent this, the Nakamoto consensus protocol is used.

The Nakamoto consensus is a specific method used in the Bitcoin network to achieve consensus among participants. It involves requiring network participants to perform resource-intensive computations to add new blocks to the blockchain and to validate transactions.

This process of performing computationally-intensive tasks is not pointless; it serves as a way to prevent malicious actors from taking over the network and manipulating the ledger. The network can maintain security and prevent attacks by requiring participants to demonstrate that they have put in significant effort to add new blocks.  Doing so ensures that participants in the network have significant computational power and makes it more difficult for any individual to manipulate the ledger.

The image shows Alice sharing her version of the blockchain with the network. However, after completing their computations, Dan, Carol, and Bob reject her version. At the same time, other proxy nodes controlled by Alice are still performing their computations and attempting to vote.

Despite Alice’s efforts, the network ultimately rejects her version of the blockchain through the process of consensus. By majority vote, the network agrees to maintain the existing version of the blockchain, which is considered the most accurate and valid record of the network’s transactions.

So now I think you can understand the zero-day that impacts so many blockchains. According to Halborn, the most critical vulnerability discovered is related to peer-to-peer (p2p) communications: attackers can craft consensus messages and send them to individual nodes, taking them offline.

Another zero-day identified by Halborn was uniquely related to #Dogecoin, including an RPC vulnerability impacting individual miners.

Subsequently, variants of these zero-days were also discovered in similar blockchain networks, potentially leading to DoS or RCE attacks.

We can understand the vulnerabilities mentioned above in terms of our Alice example.

As per Halborn, Alice now has the ability to send consensus messages that can cause the receiving nodes to shut down and disconnect from the network.

If a significant portion of the nodes in the blockchain network are offline, Alice could potentially gain control of the majority of the remaining nodes. With this level of control, Alice could attempt to launch a 51% attack on the network, which would allow her to carry out double-spending attacks and manipulate the blockchain’s transaction history.

Halborn was hired to audit Dogecoin’s code, and due to the severity of the issue, they did not release technical or exploit details at the time. I attempted to read the commit history messages to understand the technical details better, and eventually found the relevant details in the security-improvements section of the latest release notes. That release contains multiple security-related fixes:

  • The alert system has been removed, and the processing of alert messages has been disabled
  • The transaction download system has been made more reliable
  • The protocol implementation has been amended to reject buggy or malformed messages
  • Memory management in events of high network traffic or when connected to extremely slow peers has been improved

I believe the malicious actor could utilize the alert system to send crafted consensus messages and achieve remote command execution on the affected node, allowing them to shut the node down or execute any other command they desired. The protocol has been amended to reject buggy or malformed messages to prevent this from happening again.

Links 

https://github.com/dogecoin/dogecoin/releases/

https://github.com/dogecoin/dogecoin/blob/master/doc/release-notes.md

https://github.com/dogecoin/dogecoin/commits/master?after=3a29ba6d497cd1d0a32ecb039da0d35ea43c9c85+139&branch=master&qualified_name=refs%2Fheads%2Fmaster

https://www.litebit.eu/en/dogecoin-fixed-a-vulnerability-that-persists-in-over-280-other-networks

https://dogecoin.com/dogepedia/how-tos/operating-a-node/#:~:text=Go%20to%20Dogecoin%20Core%20%2D%3E%20Preferences,Core%20on%20system%20login%E2%80%9D%20option.

by shivam at May 23, 2023 05:41 AM

Ansible Automation Forum for Public Sector organizations

Last Friday, the 5th of May, the Red Hat Ansible team facilitated an Automation Forum at Arbetsförmedlingen for Swedish public sector authorities. Various public sector organizations were present that day, such as municipalities, the Swedish Employment Agency, and many other government bodies.

everyone

Every organization has its own set of unique automation problems, and public sector organizations share issues typical to them. Therefore it is essential to have a platform where they can share their stories, problems, concerns, and journeys. It helps to find solutions to domain-specific problems from the different viewpoints of people who understand the issues at their core. At the forum, we, the Ansible team at Red Hat, are trying to build exactly such a community.

It was my first time representing Red Hat as a speaker at an event, so I was thrilled beyond words.

karl

The day started with Susanne Blomlöf welcoming us, describing the schedule, and explaining why we planned such an event and are trying to build this platform. The first speaker of the day was Karl Ling, a System Administrator at Arbetsförmedlingen. He shared the automation journey of Arbetsförmedlingen, explaining it from both technical and organizational perspectives. It was great to hear about their process and how they connected technology with a long-term sustainability strategy.

markus

The next speaker was Markus Närenbäck from Atea. He presented how to build a well-rounded automation strategy and how it benefits business growth. The most exciting part was when he shared some real-life scenarios to illustrate his points.

reflection

An interactive reflection session followed these two talks. All the attendees were divided into four groups and asked two simple questions: why are they here, and what does this forum mean to them? And how can they facilitate and build community inside their own organizations? After 20 minutes, each team presented their views on those questions. It was such a fantastic session; I enjoyed it the most.

anwesha2

I had my talk post lunch, a session on 'Building Communities from Within': how to build community within the organization. In my presentation, I shared how community is an essential and fundamental part of knowledge sharing and growing as a team. It also reduces wasted resources and increases potential. I further explained the idea of InnerSource and how it can help us build community within the organization, and shared some tips and tricks for doing so. The talk was well appreciated. People came up to me and said how they had benefited from it, and that made my day :).

johan

After my session, we had only one session left: Johan's talk on 'Past, present and future of Ansible automation'. He explained how and why we introduced Ansible Collections to the Ansible land, what the Ansible community is, Project Wisdom, and Event-Driven Ansible. He ended his talk with a demo of EDA, the cool new addition to the Ansible land.

joakim

I want to mention one person here: Joakim. He put in so much hard work and effort to bring this platform together, from his vision and approach towards the event to replying to my thousand queries with patience :). The event successfully provided a space for the Swedish public sector authorities to come together and be open about their automation stories.

coffeebreak

I am looking forward to more events on this platform.

by Anwesha Das at May 09, 2023 11:28 AM

Event Driven Ansible, it is

We had our 3rd Ansible Stockholm Meetup on the 3rd of May, 2023, at Sunet. It was time to learn about Event-Driven Ansible.

sthlm_ansible_may

Wikipedia says Event-driven architecture is a software architecture paradigm promoting the production, detection, consumption of, and reaction to events: actions or occurrences, recognized by software asynchronously from the external environment, that the software may handle. In simple terms, some action is triggered by some event. Following the Event-Driven Architecture way, Event-Driven Automation is gaining much popularity today. It reduces human interaction with the system and, therefore, human error. Event-Driven Ansible has become a game-changer in the field: it makes things happen exactly when they are needed, advancing the Ansible saying, "Automate all things."
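As a toy illustration of the idea only (Event-Driven Ansible itself expresses rules in rulebooks, not in Python like this), event-driven automation boils down to mapping incoming events to automated responses:

# Hypothetical events; in reality they arrive asynchronously from sources
events = [
    {"type": "service_down", "host": "web01"},
    {"type": "disk_full", "host": "db01"},
]

# Map each event type to an automated response (the "rules")
actions = {
    "service_down": lambda e: print(f"restarting service on {e['host']}"),
    "disk_full": lambda e: print(f"cleaning old logs on {e['host']}"),
}

for event in events:
    actions[event["type"]](event)  # no human in the loop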

Our mentor for the session was Magnus Glantz, Principal Solution Architect at Red Hat and Board member at Open Source Sweden. He started his session by saying, "I am going to force you to talk :)". He explained why Event-Driven Architecture is gaining popularity, what an event is, and why we should connect automation to events. He further stressed the point that automation adds hygiene to any system.

sthlm_ansible_may_3

We can categorize automation in three ways:

  • Tactical Automation
  • Process Automation (switch flows, unify process for faster and efficient delivery)
  • Advanced Automation (complicated edge use case, AI)

Tactical Automation is something we do as a quick response. Process Automation unifies processes for faster and more efficient delivery. And complicated edge use cases and AI come under the scope of Advanced Automation.

Path for automation maturity

  1. Automate service and response (look at the tickets) as volume increases
  2. Expand Sources
  3. AI OPs

Event-Driven Ansible eases automation and strengthens the infrastructure. It is a scalable, responsive automation solution that can process events containing discrete, actionable intelligence; determine the appropriate response to the event; then execute automated actions to address or remediate the event. This has potential in several areas, such as

  • Networking
  • Application
  • Cloud
  • Security
  • Infrastructure

Event-Driven Ansible helps keep a system in a desired state and automates time-consuming tasks for any IT domain. At our meetup, some network administrators got excited about EDA, especially considering the delicate nature of switches. Security is another area where EDA can prove very useful, for example in automating log security.

He ended his session with a detailed demo showing us how to use Ansible Rulebook.

sthlm_ansible_may_1

The session was interactive and frank. The meetup, which was scheduled for 2 hours, lasted for three and a half hours, where people discussed different aspects of automation technology and their stories.

Having multiple minds trying to solve similar problems brings different angles and yields various efficient solutions. Join our group and share your queries, problems, and stories. We are planning our next meetup for the second week of June; follow the Ansible Stockholm meetup page for updates regarding the date. See you all there.

by Anwesha Das at May 09, 2023 08:05 AM

Fixing missing yubikey trouble on fedora 38

Since I updated to Fedora 38, I have been having trouble with my Yubikey. If I remove the key, just plugging it back in does not help: gpg cannot detect it.

$ gpg --card-status 
gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device

The only way to get it working is to restart the pcscd service, again and again.
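On Fedora that means something like:

$ sudo systemctl restart pcscd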

As Heiko pointed out, this is a conflict between pcscd and scdaemon; the latter comes via the gnupg package in Fedora.

To solve the issue, I first tried the following:

$ echo disable-ccid >> ~/.gnupg/scdaemon.conf
$ gpgconf --reload gpg-agent

Then I noticed that I had the opensc package installed; removing it and rebooting solved the trouble for me.

$ sudo dnf remove opensc -y
May 07, 2023 03:00 PM

What is stopping us from using free software?

I had a funny day yesterday

I'll start with the evening, which I spent as a tutor in the RoboLab: a workshop for kids aged 10-18 to build their own robot with some 3d printed parts, an ESP and all the electrical equipment you need to make some wheels move. It's a great project and I have much respect for the people who initiated it and still maintain it in their free time with the children.

The space we can use for the project is called Digitallabor (digital lab) and offers anything you would want from a well equipped maker space, including a shelf full of laptops to use while you're working there.

I should not be surprised anymore, but I can't help it

Of course, all laptops run Windows. I picked one, booted it up and saw the fully bloated and ad-loaded standard installation of Windows 10. The last time a search for updates had been performed: early 2019. No customized privacy settings, nothing. Just the standard installation in all its ugliness.

I asked the people running the space why. Why? This would be the perfect place to introduce the children to free software and even shed some light upon the difference between Free and Open Source Software and proprietary, user-despising spyware (of course I did ask in a somewhat more diplomatic manner).

The answers: "The children are used to it.", "It's easier to maintain."

Yes. So the last search for updates was in 2019. That's well maintained.

Regarding "the children are used to it": I can confirm that children don't give a shit. If it runs Minetest then it's fine. That is, if they have access to a PC or laptop at home at all, because in my experience most of the kids nowadays have exactly two digital media skills anyway: tapping and swiping. So this would be the perfect place to introduce them to free alternatives!

The morning was different

We're a small company with only ~12 employees, most of whom are rather non-technical. So there is no IT department. Or in other words: I am the IT department. And our IT department finds it no longer responsible to run Windows on business PCs (at least in the world outside the US). So yesterday I prepared a new PC with Fedora 38, brought it to my colleagues and asked: "Who dares to try this Linux desktop?"

Guess who stepped forward instantly and said, "I can do that"? My ~60 year old colleague, who was a medical technical assistant when I wasn't even born and is a life-long Windows user. We did the initial configuration, synced her mails and calendars, set up printers and network drives and went through the most important peculiarities of the GNOME3 desktop. It took about 90 minutes, and then she said "I guess I'm fine from here. I'll play around with this a bit to get used to the new apps". I promised her first-level support, but she was working without any issues the whole day.

I'm really proud of her

So many people keep telling me it would be too hard, too much reorientation, to switch operating systems, but moments like that show me that the problem may lie somewhere else. People are afraid of changes. People want to spare themselves the effort. But I think that daring to make a change, instead of doing nothing despite better knowledge, will be rewarded. The next desktop PC is already prepared, so next week I will ask the question again :)

by Robin Schubert at April 28, 2023 12:00 AM

Ansible 7.5.0 is out now

Since I moved to Stockholm, I have hated the weather, more than anything the behavior of the Sun. But today it did some good; it woke me up at 4:40 feeling like 7:00, so I started working. And what a productive day it was :).

I have been doing Ansible releases for three months, and before that I was shadowing Christian to learn more about the process. But today was special: I did two releases on the same day, Ansible 7.5.0 and Ansible 8.0.0a2, the second alpha of the Ansible 8 series.

ansible

I did the Ansible 7.5.0 release first. It is the Ansible stable release. You can read the complete Changelog here

You can install it via pip.

pip install ansible==7.5.0 --user

Ansible 8.0.0a2 was my next target.
You can read our Roadmap for the Ansible 8 release cycle here and the Changelog here.

To follow all our updates on the Ansible project and community, subscribe to the Bullhorn, our weekly newsletter. Fun fact: this week will be the 100th edition of the Bullhorn.

by Anwesha Das at April 26, 2023 07:15 PM

Tumpa 0.10.0 is ready

I am happy to announce the Tumpa 0.10.0 release. Tumpa is a desktop application which allows you to create OpenPGP keys and upload them to Yubikeys with a user-friendly GUI. With Tumpa, all you need is a few form inputs and a few clicks, and done! No more wrangling and breaking your head with the command line interface.

Startscreen

This version is a complete rewrite of the initial version I released around 2 years ago. With help from Elio and his excellent team, we have a new design. Thank you OTF for providing the funding for the work.

Saptak & I decided that the code is ready to be consumed. There are still things to work on, including the UI flows. In the coming months we are going to add more features to the application to make it super useful for advanced users too.

You can create Cv25519 or RSA4096 keys via the "Generate Key" button. You can upload any key to an attached Yubikey, but remember that to use a Cv25519 key, you will need a Yubikey 5.

Showing all available keys

Installation

For Linux we have an AppImage, and for Apple M1/M2 devices we have a dmg. You can download them from the release page. Remember to have a look at the user guide, especially because you need to have the pcscd service running on Linux.

Upload successful

Technologies used

This project works because we have Johnnycanencrypt, a Python module written in Rust to do OpenPGP operations (including smartcard operations), which in turn uses the Sequoia project's Rust library to create and manipulate OpenPGP keys.

The UI is made via QML, using PySide6. This also shows that we can have decent looking desktop applications in Python.

The AppImage and Apple dmg files are available thanks to the Briefcase project from the BeeWare team.

Give feedback

Since the focus of Tumpa is on making the use of OpenPGP with smart cards user friendly and intuitive, we need a lot of feedback from users. So, if you find issues or have other feedback to improve the application, feel free to submit issues at https://github.com/tumpaproject/tumpa/issues. We are also available in the #tumpa channel on the libera.chat IRC server. Feel free to ping the IRC nicknames saptaks or kushal.

April 20, 2023 09:37 AM

Google Open Source Peer Bonus Award 2023

I am honored to be a recipient of the Google Open Source Peer Bonus 2023. Thank you Rick Viscomi for nominating me for my work with the Web Almanac 2022 project. I was the author of Security and Accessibility chapters of the Web Almanac 2022.

Google Open Source Peer Bonus 2023 Letter. Dated April 19, 2023. Dear Saptak Sengupta, On behalf of Google Open Source, I would like to thank you for your contribution to 2022 Web Almanac. We are honored to present you with a Google Open Source Peer Bonus. Inside the company, Googlers can give a similar bonus to each other for going above and beyond, so this is just a small way of saying thank you for your hard work and contributions to open source. We hope you enjoy this gift from all of us at Google and Rick Viscomi who nominated you. Thank you again for supporting open source! We look forward to your continued contributions. Best regards, Chris DiBona, Director of Google Open Source

Over the last year, I have started to spend more time contributing to, maintaining, and creating open source projects, and have reduced the amount of contract work I usually do. So this letter of appreciation feels great and gives me an additional boost to continue doing open source work.

Some of the other open source projects that I have been contributing to and trying to spend more time on are:

In case someone is interested in supporting me to continue doing open source projects focused towards security, privacy and accessibility, I also created a GitHub Sponsors account.

April 20, 2023 07:32 AM

Converting HTML Tables to CSV

Today, I decided to analyze my bank account statement by downloading it from the day I opened my bank account. To my surprise, it was presented as a web page. Initially, my inner developer urged me to write code to scrape that data. However, feeling a bit lazy, I postponed doing so.

Later in the evening, I searched the web to find an alternate way to extract the data and discovered that HTML tables can be converted to CSV files. All I had to do was save the code in CSV format. I opened the Chrome browser's inspect code feature, copied the table, saved it with the CSV extension, and then opened the file with LibreOffice. Voila! I had the spreadsheet with all my transactions.
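For the record, the scraping my inner developer wanted can also be done in a couple of lines; a minimal sketch with pandas (the file names are placeholders, and pandas needs an HTML parser such as lxml installed):

import pandas as pd

# read_html returns a list of DataFrames, one per <table> in the page
tables = pd.read_html("statement.html")
tables[0].to_csv("statement.csv", index=False)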

Cheers!

#TIL #CSV #HTML Table

April 15, 2023 05:41 PM

Keynote in PyCon Italia, 2023

My friend Dr. Brett Cannon, CPython core dev, once said, "Came for the language and stayed for the community." It is a little different for me: I came for the community and stayed for the love of it.

I joined the open source community as a lawyer to up my legal skills. Instead, I am now a community member who knows law.

My journey in the open source world has been a sail of happiness, tears, achievements, sweat, and, more than anything, of dreams. A dream of something I never saw any of my friends even dreaming of, let alone achieving. Honestly, I do not consider my story to be unique, but some people think otherwise. What do you think? You can figure it out for yourself. I will be talking about this in my keynote at PyCon Italia, 2023.

I will be talking about stories: my favorite PyLadies, the Python for Everyone program, and how an innocent piece of technology and small initiatives from and by the community can alter the lives of real people.

Join us at PyCon Italia, 2023, for 4 days of learning, togetherness, fun, and, not to forget, good food. See you all there.

by Anwesha Das at April 12, 2023 09:44 AM

40 years of the first email to Sweden

40 years ago today, at 14:02 on 1983/04/07 (7th April), Björn Eriksen received the first ever email in Sweden. It was from Jim McKie of European Unix Network (EUnet) in Amsterdam. Björn had a VAX 780 running BSD. The following is the actual email:

SWE_Mail
Return-Path:
Date: Thu, 7 Apr 83 14:02:08 MET DST
From: mcvax!jim (Jim McKie)
To: enea!ber
Subject: Hello

You are now hooked to the mcvax. This is just a test.
Reply, we will be calling you again soon!

Ignore any references to a machine called "yoorp", it
is just a test. Mail should go to mcvax!….".

Regards, Jim McKie. (mcvax!jim).

This email was transmitted using UUCP. A few years later, in 1986, Björn registered the .se TLD.

I was not even born when this email was received :)

April 07, 2023 02:49 PM

Dear pep582

Dear pep582,

By now, you know that your idea has been rejected, but the rejection came with suggestions for any future ideas. You thought you could be more useful if everyone got it in the same way, but that would also cause more maintenance burden for the upstream authors in the future. I personally tried to stay with you during this 5+ year long journey. A lot happened in life during that time. You helped me to make new friends, and helped many young ones during the workshops.

Even though formally the PEP is rejected, the implementation will be updated as required. You helped before, and you will help many in the future too. Just the way to reach you will be different.

Kushal

April 02, 2023 06:48 AM

On Things That Last


Just something that kept coming to mind today.

I met an old friend the other day. We’d fallen out of touch and gone our separate ways.
We were both gifted watches by another friend of ours at the time.
And they pointed out to me, I was still wearing the same watch.

I gently pointed out to them, that this watch was a different one, something the better half gifted me just after we got married. This is a white dial, see. The old one was blue. And then we moved on to other things.

What I didn’t tell them, was that I still have the old one.
It’s battered.
It’s scratched.
It works.
It keeps perfect time.
It’s beautiful.1

I didn’t tell them, because I imagine, they probably wanted me to remember the times we were together, fondly. Because I imagine, they think I’d get all dewy-eyed over friendships long lost.

And that’s not why I have it around at all.
It was my companion through the worst years of my life.
When I was lost, at sea and barely keeping my head above water.
When I was nursing my brutally broken heart, which then got broken again. And again.
When I had no money, and no friends.
When I was thinking of the futility of going through one more day.
That’s when I’d strap it on. And go live one more day.
I have it, because it reminds me of my resilience.
I have it, because it reminds, I decided to live.

Strange that a thing has lasted longer than my friendships.
Not so strange, when I realise that it’s no longer a gift.
It’s imbued with the echoes and memories of decades of my life. Those that I chose to live.
It’s not just mine. It’s me.

P.S. If the blue is for resilience, the white is for joy.


Feedback on this post? Mail me at feedback@janusworx.com

P.P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.P.S. Feed my insatiable reading habit.


March 31, 2023 03:03 PM

Women in tech, Stockholm 2023

What does a good day look like? For me, it is a day spent with friends, doing what I love the most: talking and spreading knowledge about open source. Yesterday was one such day.

wit_23_0

Yesterday was my first-ever Women in Tech Sweden event. The first thing I noticed upon entering the venue was the long queue of women waiting for registration. Trust me, that view itself made my day.

PyLadies booth

wit_23_2

Our PyLadies Stockholm team, Christine, Mariana, Gabriela and Alexendre, worked relentlessly to prepare for the event. Mariana created the website. Have you checked the new PyLadies logo? It is so cool. The logo defines the core of PyLadies Stockholm: diversity and equality. See if you can recognize some of us in it :). Thank you, Mariana, for this beautiful job.

I was sure I wouldn't be able to attend the event. But a week before it, my eternal savior Christine organized a ticket for me to participate in the PyLadies booth. I am so grateful to her.

At the event

wit_23_1

I choose booth duty or hallway tracks over talks at any and every conference, and today was no exception: I was at the PyLadies booth. It was my first booth duty since the pandemic, and I had a déjà vu feeling. During the pandemic I often wondered whether the need for PyLadies was still there, especially with so many online courses available now. But today, I and all my worries were proven wrong. People kept coming to the booth wanting to know more about PyLadies, encouraging us, and joining our mission.

wit_23_3

There is another hat I wear apart from being an organizer of PyLadies Stockholm: I am also part of the Ansible community and run the Ansible Stockholm Meetup group. A few people recognized me as an organizer of Ansible Stockholm rather than as a PyLadies organizer, which is a big win for me.

Meeting with friends

Ellie and I had been meaning to have an online chat for months; we even had to cancel our meeting last Friday. But it turned out we could do it in person today at the event. She was among the first people I met when I reached the PyLadies booth, and she is just as warm and helpful in person. I feel lucky to call you a friend.

Today I made some new connections and revived the old ones. Today was the day of positivity, trust, and friendship. Thank you, Women in Tech for organizing this event.

by Anwesha Das at March 24, 2023 09:21 AM

Quick Fix: Go Mod File Not Found in Current Directory

As I have begun learning Go (or golang as I needed to use for searching on the web), I’ve been running into some sort of chasm that the Go language seems to have crossed1 that my old textbook obviously could not have foreseen when it was published.2

The book asks me to do a go run . to get my file to compile and run.
Go seems to be having none of it.

go: go.mod file not found in current directory or any parent directory; see 'go help modules'

Some searching on the web, and I found setting the GO111MODULE environment variable to auto or off will do the trick. So …

export GO111MODULE='auto'

did indeed do the trick.

I’m not putting this in my .bashrc to make it persistent though. The Go folks must have had some reason to make this change, which I’m (currently) not aware of. Was it architectural? Security related? That is a web search for another day.
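(For completeness: the module-aware fix, rather than the opt-out, would be to create a go.mod for the project; the module path below is just a placeholder.)

go mod init example.com/hello
go run .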

Right now the code compiles and I must be off to my next exercise.


Feedback on this post? Mail me at feedback@janusworx.com

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. I’m using v1.20.1 ↩︎

  2. I’m learning from Packt’s, The Go Workshop (Delio D’Anna et al), 2019 ↩︎

March 15, 2023 01:41 PM

Huginn: Vive La Pratique Délibérée Avec Les Flux RSS

I’ve been learning French, on my own at a snail’s pace over the past year or so.
All because I want to read Émilie du Châtelet, Dumas, Verne, and Montaigne in their native tongue.

Since I cannot do immersion nor give regular practice the time it wants, it’s … slow going. But I do, do it. Regularly and deliberately. I’ve built up a vocabulary of about 500-700 words by now, and I know if I keep this up, I’ll be able to read well in time.

One way I practice is to take something in French and read it slowly, looking up words and phrases and slowly building up the missing pieces.
And to do that, I need something in French. I stumbled across the France Pittoresque site and fell in love with it. French and History! Just my kind of site.

Being the lazy bum, that I am, I went looking for an RSS feed, so that I could read their daily article in Reeder.
And hit my first stumbling block.
The good folk at France Pittoresque, don’t provide a feed. Or if they do, I couldn’t find it. Even after looking and looking. And then looking some more.1

You know where I’m going with this. This is a job for Huginn!

This is how the site looks today

click to embiggen.


And this little item is what I want to get, on a daily basis.

click to embiggen. Actually click any of them to embiggen!


So I whipped up this scenario.


  1. It scrapes that little section I mentioned above and gets me the title, the url and the description of the article


  2. And generates an RSS feed that I can then use in my feed reader!
    The feed is normally at:
    http://your-huginn-instance/users/[your-user-id-int]/web_requests/[agent-id-int]/some-'secret'-string.xml

    It looks more complicated than it is. It’ll get generated the moment you create your agent. One advantage of using a vm is that my feed is accessible from any device.


You can find the scenario code here, if you want to play with it.

Huginn has proven invaluable when it comes to doing all the little things, that I’d rather not do :)
Merci beaucoup, Huginn!


Feedback on this post? Mail me at feedback@janusworx.com

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. You’re talking to a lazy bum here. So quite possible I overlooked something. ↩︎

March 15, 2023 04:02 AM

Huginn: On Becoming a Cal Newport Podcast Packrat

First things first, you should be listening to Cal Newport and reading his books too!
His recent episode on books and reading was lovely!1

If I remember right, he started doing the podcast a couple of months after the pandemic broke out and the world shut down. And it felt very much at the time like one of those friendly voices across the waves in some post-apocalyptic movie.
It felt warm, personal and gave me something to do in those dark days.
Ergo, I have a strong connection to it and I began saving every episode, at around episode 10 or so.

Oh, and speaking of slow productivity, it’s amazing to see what Cal has fashioned the podcast into. A slow steady drumbeat of episodic growth has led to what he calls the Cal Newport Empire, with media galore.

So, like I was saying, I’ve been downloading them ever since the beginning, because I’ve some sort of unnatural attachment to them. They’re the thing I want to listen to if I’m marooned on a desert island.2

And now that I have Huginn, the computer downloads it for me, in the folder I want, with the naming convention, I want.


Here’s the scenario that does it.

  1. It scrapes the website, once every twelve hours to check for a new episode, and grabs the url and title.

  2. It uses the url to download the episode to a folder.

  3. And at the same time, sends me a notification. The title tells me what’s new :)

I probably could use Huginn to rename and copy the file to where I want, but it runs from a Docker container and does not have access to the parent filesystem.
So this little Python script3 does it for me.

from pathlib import Path
import shutil


downloads_folder = Path('/path/to/downloaded/episode/directory/')
destination_folder = Path('/path/to/destination/directory/')

for audio_file in downloads_folder.iterdir():
    if audio_file.suffix == '.mp3':
        # Drop the first two dash-separated chunks of the file name;
        # what remains is the episode number followed by the title words
        original_name = audio_file.stem.split('-')[2:]
        episode_number = f'CNPE{original_name[0]}'
        raw_file_name = ' '.join(original_name[1:]).title()

        new_file_name = episode_number + ' - ' + raw_file_name + audio_file.suffix

        # audio_file is already a full path, so it can be moved directly
        shutil.move(audio_file, destination_folder / new_file_name)

It looks for any mp3 files in the specified folder, renames them the way I want them named and then moves them to my packrat archive. Et voilà!


The scenario and scripts are here, if you want them.


Feedback on this post? Mail me at feedback@janusworx.com

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. What else, do you expect a bookworm to tell ya? 😂 ↩︎

  2. with a stock of mp3 players and rechargable batteries ↩︎

  3. scheduled via crontab ↩︎

March 14, 2023 12:25 PM

Huginn: the Rube Goldbergness of Updating Hugo


Click the pic to embiggen.
Rube Goldberg illustration via James Vaughan


Ended up too sick to even tinker, since I last wrote.
Well enough now, to scratch out some prose to get my brains in gear.
Let’s start with Huginn adventure #1.
Buckle up!

Background

I found Huginn, and am using it as my man friday.1

The “Problem”

I run Hugo on three machines.
I’m tired of updating to the latest release manually.
The computer can do it for me!

Why this way? Why? Why?!

  1. It’s fun!
  2. Yes, there are other (most likely simpler) ways to do it, but Huginn is my new shiny hammer … and well you know!
  3. Did I tell you it’s fun? :)

How now brown cow?

Some more background:

  1. I could just apt install, but I’m too posh for that 😂 I want the latest version of Hugo. And the extended version at that.
  2. I depend on two more helpers.
    a. Syncthing syncs the folder that the Hugo deb package downloads to, across all my machines
    b. Pushover sends me push notifications. Huginn integrates with it.

Having said that, here’s a pic of my four step scenario, followed by the output of each step.


  1. Scrape the Hugo github releases page to get the title.2

  2. Get the latest release title3 and fashion a url out of it.

  3. Download Hugo to a folder.

  4. Send myself a push notification.4

And tada!

So we done? No, you silly goose!

We’re done downloading the installer. Said installer now needs installing.
So my very imaginatively titled install-hugo.sh script5, which runs once a day via a crontab entry …

  1. takes the file,
  2. checks to see if it’s downloaded in the last day
  3. and if so, installs it.

Mind you, it needs root privileges. And then it also …
4. deletes any stray debs more than a few days old.6

#! /usr/bin/env bash

cd /path/to/hugo/deb/file

# Install any deb from the last 24 hours (-r: do nothing if no file matched)
find . -maxdepth 1 -name '*.deb' -mmin -1440 | xargs -d '\n' -r dpkg -i

# Remove debs more than 2 days old
find . -maxdepth 1 -name '*.deb' -mtime +2 | xargs -d '\n' -r rm -f

And now we’re done :)

Holy shiny hammer, Batgirl! It worked!


The scenario, should you install Huginn and want to import and play with, as well as the shell script are here, on Github.

Note: I had originally used the RSS agent in step one, instead of the Website agent I use now, and that was much easier. But it would detect the new feed entry and then just sit there hatching eggs, not doing anything. I wish I had a snapshot (of the missed events, not the hatched eggs), but I digress. The Website agent, for all its hacky xpath quackery, works.


Feedback on this post? Mail me at feedback@janusworx.com

P.S. Subscribe to my mailing list!
Forward these posts and letters to your friends and get them to subscribe!
P.P.S. Feed my insatiable reading habit.



  1. it could have been girl friday too, but I call mine Bertuccio, after Edmond Dantes’ man friday ↩︎

  2. I cannot believe, I go scrape a website and learn all that xpath voodoo, just to get a measly text string. ↩︎

  3. And here’s where Huginn helps. It remembers the last title. And only proceeds if something has changed. ↩︎

  4. the push notification is just like that. something whimsical for me. It’s independent of the download. I should probably have called it Step 3a. or 2b. Never mind now. ↩︎

  5. which is also synced and available across all three machines ↩︎

  6. I let them be for a couple of days, in case a machine is not online that day. ↩︎

March 13, 2023 02:05 PM

Thank you, my VMware team!

December 12, 2022


Dear Team,

As my last day at VMware approaches, I wanted to take a moment to thank each and every one of you for the support and guidance you have given me during my time at VMware.

To dims and Navid, I am especially grateful for helping me join the great organisation and for your ongoing support. Thank you for making me feel welcomed and valued from day one.

Nikhita, your support and sponsorship have been invaluable in helping me grow in my career. You are not only a great colleague, but also a wonderful friend inside and outside of VMware. I truly mean it when I say that YOU ARE MY ROLE MODEL.

Meghana, thank you for being an amazing onboarding buddy and for being there for me through every challenge and success. Your friendship, kindness and selflessness mean the world to me.

Arka and Yash, thank you for being amazing work partners and for the countless long troubleshooting and learning sessions we had together. I will miss working with you.

Nabarun, thank you for being an exceptional mentor and guiding me not only on technical matters, but also providing valuable advice and teaching me important soft skills.

Madhav, thank you for being such a kind-hearted person and always supporting me and cheering me on.

Anusha, Christian, Prasad, Akhil, Arnaud, Rajas, and Amit, thank you for sharing your wealth of professional experience with me and especially, for teaching me what it means to work hard. It has been an absolute honor to work with each of you, even if for a short time.

Finally, Kriti, Kiran, and Gaurav, thank you for supporting me throughout my journey at VMware.

Andrew, Dominik, Peri, Sayali, I never could have imagined finding such wonderful friends at VMware. I will deeply miss you all. Your friendship means so much to me.

Thank you all for being such a great team. I will always treasure the memories and the lessons I have learned here.

Best regards,

Priyanka Saggu


PS:

It’s amazing that the “DREAM TEAM” tweet that you posted about years ago, Nikhita, actually came together for me and I got to work with you. It’s still hard to believe it actually happened. Honestly, I’m feeling very emotional after typing this. Thank you for all your support always! ❤️

Screenshot 2022-12-12 at 11 51 20 AM

December 12, 2022 12:00 AM

My first custom Fail2Ban filter

On my servers that are meant to be world-accessible, the first things I set up are the firewall and Fail2Ban, a service that automatically updates my firewall rules to reject requests from IP addresses that have failed repeatedly before. The ban duration and the number of failed attempts that trigger a ban can easily be customized; that way, bot and hacker attacks that try to break into my system via brute force and trial and error can be blocked, or at least delayed, very effectively.

Luckily, many pre-defined modules and filters already exist that I can use to secure the services I offer. To set up a jail for sshd, for instance, and do some minor configuration, I only need a few lines in my /etc/fail2ban/jail.local file:

[DEFAULT]
bantime  = 4w
findtime = 1h
maxretry = 2
ignoreip  = 127.0.0.1/8 192.168.0.1/24


[sshd]
enabled   = true
maxretry  = 1
findtime  = 1d

Just be aware that you should not change /etc/fail2ban/jail.conf, as this will be overwritten by fail2ban. If a jail.local is not already present, create one.

As you can see, I set some default options about how long IPs should be banned and after how many failed tries. I also exclude local IP ranges from bans, so I'll not lock myself out every time I test a new service or setting. However, for sshd I even tighten the rules a bit, since I only use public key authentication where I don't expect a single failure from a client that is allowed to connect. All the others can happily be sent to jail.

It's always a joy but also kind of terrifying to check the jail for the currently banned IPs; the internet is not what I would call a safe place.


sudo fail2ban-client status sshd
Status for the jail:
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     211
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 2016
   |- Total banned:     2202
   `- Banned IP list: ...

My own filter

To identify IP addresses that should be banned, Fail2Ban scans the appropriate log files for failed attempts with a regular expression, as the sshd module does with my /var/log/auth.log.

As mentioned above, quite a few pre-defined modules already exist. For my nginx reverse proxy, the modules nginx-botsearch, nginx-http-auth and nginx-limit-req are available; the log file they scan by default is /var/log/nginx/error.log.

However, having a look in my /var/log/nginx/access.log I regularly find lots of failed attempts that are probing my infrastructure. They look like this:

118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
118.195.252.158 - - [01/Oct/2022:02:08:59 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.7/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.4/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/db/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.3/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysqladmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/myadmin/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.1.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.9.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/pma/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.10.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.8.0.2/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/phpMyAdmin-2.11.0/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
42.236.120.51 - - [01/Oct/2022:02:26:57 +0200] "GET http://xxx.xxx.xxx.xxx:80/mysql/scripts/setup.php HTTP/1.0" 404 162 "-" "-"
...
185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:96.0) Gecko/20100101 Firefox/96"
198.98.59.132 - - [30/Sep/2022:01:51:59 +0200] "POST /boaform/admin/formLogin HTTP/1.1" 404 134 "http://xxx.xxx.xxx.xxx:80/admin/login.asp" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /.env HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:29 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /config.json HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"
20.168.74.192 - - [30/Sep/2022:01:54:30 +0200] "GET /.git/config HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Linux; U; Android 4.4.2; en-US; HM NOTE 1W Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 UCBrowser/11.0.5.850 U3/0.8.0 Mobile Safari/534.30"

I don't use phpMyAdmin and I don't host a WordPress site (requests to wp-login and wp-admin are pretty common), and I would prefer to ban IPs that scan my infrastructure for these services. So I wrote a new filter to scan my nginx access.log file for requests of that kind.

In /etc/fail2ban/filters.d/nginx-access.conf I added the following definition:

[Definition]

_daemon = nginx-access
failregex = (?i)^<HOST> .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)

  • (?i) makes the whole regular expression case insensitive, so it will capture phpmyadmin and PhpMyAdmin equally.
  • ^<HOST> will look from the start of each line to the first space for the IP address. <HOST> is a defined capture group from Fail2Ban, that must be present in failregexes to let Fail2Ban know who to ban.
  • .* matches any character, and an arbitrary number of them
  • (wp-login|wp-admin...) these are the request snippets to look for; in parentheses and separated with the pipe operator, it will look for matches of either of the given strings.
  • (404|403) are http responses for "file/page not found" and "forbidden". So if these pages are not available or not meant to be accessed, this rule will be triggered.
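Before wiring the filter in, the pattern can be sanity-checked with plain Python; here is a small sketch in which Fail2Ban's <HOST> tag is swapped for an ordinary capture group (the log line is shortened from the samples above):

import re

# <HOST> replaced by a plain named group for testing outside Fail2Ban
pattern = re.compile(
    r'(?i)^(?P<host>\S+) .*(wp-login|xmlrpc|wp-admin|wp-content|phpmyadmin|mysql).* (404|403)'
)

line = '185.183.122.143 - - [30/Sep/2022:01:19:48 +0200] "GET /wp-login.php HTTP/1.1" 404 134 "-" "Mozilla/5.0"'
match = pattern.search(line)
print(match.group('host') if match else 'no match')  # prints 185.183.122.143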

In my jail.local I add the following section to use the new filter:

[nginx-access]
enabled   = true
port      = http,https
filter    = nginx-access
logpath   = /var/log/nginx/access.log

Restart the fail2ban service (e.g. systemctl restart fail2ban) to enable the new rule.

I started with only a few keywords to filter, but the regular expression can easily be expanded with further terms.

by Robin Schubert at October 01, 2022 12:00 AM

Hope - Journal

This post accompanies Hope.

This is a daily journal of my efforts, meant to be read from bottom to top.

To explain the jargon:

  • cw: current weight
  • gw: goal weight

Ok, let’s start.

September 12, 2022

- cw: 80.3 kgs

Starting things off again. After coming back to Bangalore, I had a tough time settling in, so I was mostly eating food from outside and gained like 10 kgs in the past month, haha!

This time it’s a bit different from the last, as I will be hitting the gym and walking as well. Let’s see how it goes!

To begin, I started with a 15-minute walk on the treadmill at 6 km/h and incline 10. In the evening, I went for a 1.53 km walk at a pace of 10:05 min/km.

I chugged a total of 1.25L of water yesterday. Need to bring this up to 4L a day.

September 12, 2022 12:00 AM

Connecting Laptop and Phone | KDE Connect

I wanted an application that connects my phone with my laptop so that I can get important notifications from my phone on my laptop. First, I tried scrcpy; however, it was more like a remote desktop protocol, so it did not fit my purpose. Then I found KDE Connect, which is useful for my case, as it uses wifi for the connection.

KDE Connect mobile application has a lot of features.

Using KDE Connect, I can send my clipboard from my phone to my laptop and vice versa; I can even send files to my laptop. The feature I liked most is the slideshow remote: with it, one can control a presentation from their phone. However, the remote input feature is not working for my device; I will try to fix it if possible.

The feature I like in the desktop client is Ring device: with it, I can always find my phone if it is silent, as long as it is connected to wifi. However, if my device is not connected, it is a hassle to find it.

by shivam at September 06, 2022 06:12 PM

Using ssh-keygen for generating ssh keys

I wanted to create an ssh key for web server ssh login. I searched and found this article.

I have used ssh-keygen in the past by following a tutorial, but this time I wanted to learn about it properly.

After reading the article, I learned I could select the cryptographic algorithm for generating the key.

ssh-keygen -t algotype -b keysize -f /path/filename

I tried generating keys with different algorithms and different key sizes. For some algorithms the key size is fixed; for others it is not. If I enter a large key size, it takes significant time to generate the key.
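For example, an Ed25519 key (whose size is fixed, so no -b is needed) and a 4096-bit RSA key; the file names here are arbitrary:

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_test
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_test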

I will soon try generating the keys using my hardware key, as it can eliminate the need for passphrases.

by shivam at September 04, 2022 06:07 PM

The Shift

My Nani (maternal grandmother) used to tell me about her experiences and stories of the village where she lived with her in-laws. She said that the women of that time would ask each other in the evening whether their horses had arrived home. I asked what that meant, and she explained that people used to travel by horse, so by asking about the horse, the ladies were asking whether the family member or husband of the woman in question had come home, as travel used to take days back then. She mentioned that everyone used to have a horse at home, just as we have bikes and cars now. Then horses were replaced by bicycles and new automobiles.

I have observed that the things everyone owned in the past have been replaced by new technology, and the old things have become luxuries. For example, everyone used to have a bicycle at home, but now they are gone, owned mainly by upper-class people. The same goes for horses: once almost everyone owned a horse, and now it is mostly rich people who own horses, for various purposes.

There are many things we are using right now that will soon be replaced by new ones, and we will never notice until people start selling them as vintage items. Things like radios and cameras are not easily available in the market right now, yet a lot of personal memories are associated with them. I think that's why people buy or keep old items.

by shivam at September 03, 2022 10:26 AM

How I started programming

My three kids are now 5, 7 and 9 years old. Meanwhile, they all have their own rooms which, for the two older school kids, contain desks with a Raspberry Pi 400 on them. They use it to look up pictures of Pokemon, to listen to music and to play minetest, supertuxkart or the secret of monkey island :) Well, they also used them for joining classes remotely during the lockdowns.

My primary intention was to make the computer accessible so that - whenever their interest arises - they could play around and discover on their own. Today I think that the number of possibilities may be way too high to just sit down and start with something specific.

I am actually considering running the Pis in some kind of kiosk mode, to reduce distraction. I remember that, on the first computer I used, we ran one program at a time. If you decided to run another program, you would turn off the computer, change the floppy and restart. Of course it's nice to have multiple things running at once on a computer, but to learn something new, I would argue that running one thing and only this one thing might be best.

Our first family computer

Thinking back to when I was their age, it must have been the time when my father received an old Amstrad PCW (Joyce) from a friend - our first computer.

I was fascinated by that machine. I loved the green-on-black text and the different noises it made - especially the dot matrix printer noises :D My father used it for word processing, and because that was all he needed, it was all he ever tried. I also loved editing text in locoscript (which was just awesome) and playing the few games that were available.

However, the Joyce came with BASIC and with the Logo programming language. I had no idea what either of them was, nor had anyone in our family. So one day I grabbed the manuals (which luckily were in German) and started learning Logo and running the examples until I was able to draw my own little pictures. In a playful manner I learned the concepts of algorithms: of variables, loops and subroutines.

At that time, BASIC was still incomprehensible to me. This changed when my parents, who wanted to foster my interest but didn't quite know how, gifted me a VTech SL, an educational computer that could not really do much, but came with BASIC and a manual that was actually appropriate for children and that I could follow along nicely. So I soon had plenty of those little programs that would ask you for your name or age and then make funny comments about it. My main motivation to write code was always to eventually develop a cool game. Good for me that some of my friends shared that interest, and one in particular I considered a real programming wizard.

Interest amplification through friends

When I was young, LAN parties were the real thing. I saved money for a then cheap Medion PC - an Intel Pentium D with an NVidia RIVA TNT graphics card. The only condition my parents put upon me was that I had to pass an official typewriting course - "The computer is not just a toy; learn touch-typing so you can use it for work/school".

So you would carry your midi tower, 17 inch CRT monitor and a box of cables over to a friend's basement and forget about daytime and the rest of the world for one weekend over Duke Nukem 3D, Starcraft and Jedi Knight - Dark Forces 2. The friend at whose place we met was two years older and first impressed me when we were missing the last BNC terminator to finalize our LAN connection (yes, that was before the time of Ethernet over twisted pair, when all PCs had to be hooked up in a line, connected by a coaxial cable and cleanly terminated on both ends). So he grabbed an ohmmeter, measured the resistance the terminator had to have, found a fitting resistor in a drawer and bent it into shape to close our network connection.

He was regularly programming in Pascal, and I was blown away when he showed us his self-written window manager/desktop environment. It could not do too much beyond showing files as icons which you could nicely customize in color, but to me it was magic. Together we installed Borland Pascal on my machine and he showed me how to use the built-in documentation system. However, my English skills at that time were simply not good enough to really make sense of that excellent documentation. So I couldn't wait for the computer science course in school to start.

Two extremes of school computer science

Computer science. Awesome! I was so excited about it that it hurt even more when we realized it would be a complete disappointment. The first "computer science" course I had in school was nothing but a Microsoft Word/Excel/Powerpoint introduction, and not even a good one. Well, we endured, and in the next year the teacher changed, and so did the course. And that may have been the best class I've ever had.

The new computer science teacher was also a physics teacher and was not too popular with the kids. He had a quite nerdy 70s look, which I appreciate today but which was inscrutable to us when we were young, and a funny name that translates to "beef". However, the topics he covered and the hands-on way he taught them were just great. Within two years we started with the basics of the Pascal programming language and the workings of computer algorithms in a Logo-like environment. After that we switched over to abstract data types (queues, lists, linked lists, trees etc.), computer architecture down to the level of "what does an ALU do, and how?", and finally we wrote our own assembly code to draw icons and images on the screen. That must have been in old unprotected (real) mode, where you could just write directly into the video adapter memory, which was mapped into the PC's memory.

Soon enough we found ourselves bumming instruction lines from our assembly programs to find the most elegant and shortest solution to a problem, looking over each other's shoulders and admiring clever tricks. When I read Steven Levy's Hackers many years later, I perfectly remembered that feeling when reading about the first MIT hackers, hacking on the PDP-1.

We finished the course with a group project: we developed an idea for a 2D racing game we called "Geisterfahrer" (wrong-way driver), where the player had to dodge oncoming traffic. We identified the different tasks we had to do, planned what routines needed to be programmed and assigned teams. It didn't work out well, but hey, the concept was superb.

College, work and DGPLUG

I hate to admit it, but back in my school days I didn't like the computer science course very much. I simply could not appreciate the value of those lessons; I was bored by abstract data types, didn't know what I would ever need computer architecture knowledge for, and was a bad team player in our final programming task. Only when I was in college, studying physics and computer science, did I realize just how good this school course had been. In two years at college we covered exactly the same topics, going just as deep, but this time I was in a course with ~200 people instead of just 20.

I learned Java and C/C++ basics at college, and when I applied for a project to write my bachelor's thesis, I was looking for programming tasks in physics working groups; there were, and still are, plenty of them. I did the same when I started my master's thesis, this time programming in Java and C# (just because the syntax was similar but the performance was way better), and after that once again the same to find a PhD position - this time in a medical field. I started to learn Python with Mark Pilgrim's Dive Into Python, which was an excellent choice for me, because it gave plenty of examples and comparisons with other programming languages I already knew.

There's not much interesting to say from that era except one thing: in terms of programming, I was still a bad team player. The code I wrote was hard to maintain; I wrote it alone and I wrote it so it worked for me. I imagine the poor people coming to the working groups to continue my work had a hard time. I simply never learned how to develop software collaboratively - this part was actually not covered in college.

This only changed when I learned about DGPLUG and the summertraining, where - as I read - people were taught what needs to be known to start contributing to Open Source projects. I've written about that project before, and every summer I realize how much it has changed the way I work today for the better. And it is only now that I feel like I almost know what I am doing, and why, when I write code.

by Robin Schubert at September 02, 2022 12:00 AM

The Debug Diary - Chapter I

Lately, I was debugging an issue with the importer tasks of our codebase and came across a code block which looks fine but makes an extra database query in the loop. When you have a look at the Django ORM query

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more_filters>
).only("manufacturer_code", "uid", "year", "model", "trim")

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

you will notice we are using only(), which loads just the set of fields mentioned and defers the other fields; but in the loop we are using the field trim_processed, which is a deferred field and will result in an extra database call for every row.

Now that we have identified the performance issue, the best way to handle cases like this is to use values or values_list; the use of only should be discouraged in cases like these.

The updated code will look like this:

jato_vehicles = JatoVehicle.objects.filter(
    year__in=available_years,<more-filters>).values_list(
    "manufacturer_code",
    "uid",
    "year",
    "model",
    "trim_processed",
    named=True,
)

for entry in jato_vehicles.iterator():
    if entry.manufacturer_code:
        <logic>
    ymt_key = (entry.year, entry.model, entry.trim_processed)
...

By doing this, we are safe from accessing fields that are not mentioned in the values_list; if anyone tries to do so, an exception will be raised.

Note: by using named=True we get the results as named tuples, which makes it easy to access the values :)
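
To illustrate (the extra field here is hypothetical): each entry is a named tuple restricted to the requested fields, so an unfetched field fails loudly instead of silently hitting the database.

entry.trim_processed    # works: it was requested in the values_list
entry.some_other_field  # raises AttributeError instead of making an extra query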

Cheers!

#Django #ORM #Debug

August 30, 2022 07:34 AM

LDAP authentication on Home Assistant

Last week I wrote a few sentences about a beautiful script I found to authenticate against an LDAP server, which could be used e.g. on Home Assistant, a platform to manage home automation and the like. We deployed a Home Assistant instance at work to monitor temperatures in various rooms and fridges, and to raise notifications and alarms should temperatures exceed certain thresholds. All team members should be able to log into the system using their central login credentials from the LDAP server.

Unforeseen difficulties

The shell script uses either of the command line utilities ldapsearch (from the openldap-clients package) or curl to make a request to the LDAP server, which requires a valid username and password. Both commands return an exit code > 0 if something goes wrong; as usual, the exit code 0 lets us know that the command worked and thus that the username/password combination was correct. Further, the LDAP server can be queried for some extra attributes like displayName or others, which can be mapped into the requesting system.

However, there was one issue I hadn't anticipated: neither ldapsearch nor a curl compiled with LDAP support was available on the Home Assistant.

There are plenty of ways to deploy Home Assistant. We had a spare Raspberry Pi and decided to use the HassOS distribution that is recommended when installing on a Pi. HassOS (the Home Assistant Operating System) is a minimalistic operating system that deploys the individual modules of Home Assistant as containers. The containers that are deployed are usually built on Alpine images. However, there were two problems:

  1. Software that I would install in any container would not be persistent but vanish on every re-boot.
  2. I couldn't even locate, let alone access the correct container that does the authentication.

Trial and error

As a proof of concept, I installed an SSH integration that would at least let me communicate with parts of the Home Assistant system via ssh. The ssh container by default also mounts the config and other persistent directories of Home Assistant.

So I downloaded the ldap-auth.sh script to the persistent config folder, added the ldapsearch tool with apk add openldap-clients, and configured ldap-auth.sh until I was able to authenticate. I updated the Home Assistant config with an auth_providers section like this:

homeassistant:
  auth_providers:
    - type: command_line
      command: /config/scripts/ldap-auth.sh
      meta: true
    - type: homeassistant

Beware! Do include type: homeassistant in your list of auth providers or you will lock yourself out of the system if the script does not work correctly (just like I did).

After reloading the config, login with the command_line type of course failed, but I didn't find any logs that would propagate the error message, so I manually added some echo lines to the script, only to find out that ldapsearch could not be found by the authenticating container.

So I tried my luck with curl; however, I could not make any reasonable request without built-in LDAP support.

Building my custom curl

So I figured I basically had three possibilities:

  1. Use a different distribution of Home Assistant that I might be able to control better,
  2. request the feature of having openldap-clients baked into the container images, or build (and maintain) the images myself, or
  3. build curl for my target container with all the needed functions statically linked into one binary.

I assumed that all containers in the Pi's Home Assistant ecosystem would have the same architecture and base as the ssh container, i.e. Alpine on aarch64. So I installed all dependencies I needed on the ssh container, cloned the curl repo and started configuring, installing missing dependencies on the fly.

./configure --with-openssl --with-ldap --disable-shared

Choosing the SSL library is mandatory; --disable-shared should prevent the use of any shared library, i.e. of any dependency I had to install that would not be available on the target machine later.

The build went through and I had an LDAP-enabled curl that I could test my requests with, so again I tinkered with the ldap-auth.sh script until it would succeed.

However, when used from the web interface it again would not work, this time complaining about missing dependencies, which I thought I had all included.

Checking the compiled binary, I found it was 769.4K, much bigger than my 199K system curl, so something must have been linked statically. Looking up the shared object dependencies revealed what was still missing:

[core-ssh ~]$ ldd curl
        /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libssl.so.1.1 => /lib/libssl.so.1.1 (0x7f92f76000)
        libcrypto.so.1.1 => /lib/libcrypto.so.1.1 (0x7f92d26000)
        libldap.so.2 => /lib/libldap.so.2 (0x7f92cc1000)
        liblber.so.2 => /lib/liblber.so.2 (0x7f92ca3000)
        libc.musl-aarch64.so.1 => /lib/ld-musl-aarch64.so.1 (0x7f930c0000)
        libsasl2.so.3 => /lib/libsasl2.so.3 (0x7f92c79000)

While these are still far fewer dependencies than my system-installed curl has:

=> ldd `which curl`
        linux-vdso.so.1 (0x00007ffc8fdb6000)
        libcurl.so.4 => /usr/lib/libcurl.so.4 (0x00007fce55263000)
        libc.so.6 => /usr/lib/libc.so.6 (0x00007fce55057000)
        libnghttp2.so.14 => /usr/lib/libnghttp2.so.14 (0x00007fce5502c000)
        libidn2.so.0 => /usr/lib/libidn2.so.0 (0x00007fce5500a000)
        libssh2.so.1 => /usr/lib/libssh2.so.1 (0x00007fce54fc9000)
        libpsl.so.5 => /usr/lib/libpsl.so.5 (0x00007fce54fb6000)
        libssl.so.1.1 => /usr/lib/libssl.so.1.1 (0x00007fce54f1f000)
        libcrypto.so.1.1 => /usr/lib/libcrypto.so.1.1 (0x00007fce54c3f000)
        libgssapi_krb5.so.2 => /usr/lib/libgssapi_krb5.so.2 (0x00007fce54bea000)
        libzstd.so.1 => /usr/lib/libzstd.so.1 (0x00007fce54b41000)
        libbrotlidec.so.1 => /usr/lib/libbrotlidec.so.1 (0x00007fce54b33000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00007fce54b19000)
        /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fce55380000)
        libunistring.so.2 => /usr/lib/libunistring.so.2 (0x00007fce5496b000)
        libkrb5.so.3 => /usr/lib/libkrb5.so.3 (0x00007fce54892000)
        libk5crypto.so.3 => /usr/lib/libk5crypto.so.3 (0x00007fce54862000)
        libcom_err.so.2 => /usr/lib/libcom_err.so.2 (0x00007fce5485c000)
        libkrb5support.so.0 => /usr/lib/libkrb5support.so.0 (0x00007fce5484d000)
        libkeyutils.so.1 => /usr/lib/libkeyutils.so.1 (0x00007fce54846000)
        libresolv.so.2 => /usr/lib/libresolv.so.2 (0x00007fce54831000)
        libbrotlicommon.so.1 => /usr/lib/libbrotlicommon.so.1 (0x00007fce5480e000)

there were still way too many shared libraries involved for my taste.

I even asked in #curl on the Libera Chat network what I could have done wrong or misunderstood.


14:57:34    schubisu | hi everyone! I'm trying to build a statically linked curl
                     | and configured with `--with-openssl --with-ldap --disable-shared`.
                     | However, when I run the binary on another machine it says
                     | it cannot find the shared libraries libldap and liblber. Did I
                     | misunderstand static linking?
15:27:25      bagder | static linking is a beast

Well, it was nice to hear that it may not have been entirely my fault :) bagder pointed me to Static curl, a GitHub repository that builds static releases for multiple platforms (YAY), but sadly also with disabled LDAP support (AWWW). Running the build script with LDAP enabled didn't go through either.

An ugly hack to the rescue

Having spent way too much time on this issue, I went ahead with something that may be an ugly hack, but is also a "works for me": I had already copied the statically linked curl into the persistent config folder, so I would just add the missing libraries there as well.

I figured that of the 7 shared dependencies, 4 were available in the standard Alpine image anyway, so I was missing only three files:

  • libldap.so.2
  • liblber.so.2
  • libsasl2.so.3

which I copied from my ssh container into the persistent storage. I adjusted the ldap-auth.sh script one last time to add one line:

export LD_LIBRARY_PATH="/config/scripts"

and that did the trick.

I also confirmed that on the fresh system after a reboot, everything is still in place and working beautifully :)

by Robin Schubert at August 26, 2022 12:00 AM

Introducing Blogging Friday

It's not that I don't have things to write about; in fact, I learn interesting new things every week. I have, however, never integrated a dedicated time to write new posts into my weekly routine. So, to not procrastinate any further, I am starting Blogging Friday right now with some things I did this week.

Lower the threshold for new posts

I'm using lektor as a static site generator; it's lightweight, and new posts are really quick to generate. All it takes is a new sub-folder in my blog directory, containing a contents.lr file with a tiny bit of meta information. Apparently even this little effort is enough to trigger my procrastination. So to get this hurdle out of the way, a little shell script is quickly written:

#!/usr/bin/env bash
#filename: new_post.sh

if [ -z "$1" ]; then
    echo "usage: $0 <title>"
    exit 1
fi

posttitle="$*"
basepath="/home/robin/gitrepos/myserver/blog/content/blog"
postdir=$(echo "$posttitle" | sed -e "s/ /_/g" | tr "[:upper:]" "[:lower:]")
fullpath="$basepath/$postdir"
postdate=$(date --iso)

if [ -e "$fullpath" ]; then
    echo "file or directory $postdir already exists"
    exit 2
fi

mkdir "$fullpath"
echo "
title: $posttitle
---
pub_date: $postdate
---
author: Robin Schubert
---
tags: miscellaneous, programming
---
status: draft
---
body:
" > "$fullpath/contents.lr"

echo "created empty post: $postdir"

LDAP authentication for random services

I've integrated a few web services into our intranet at work, like a self-hosted gitlab server, a zammad ticketing system, nextcloud and the like. One requirement to integrate well into our ecosystem is the possibility to authenticate against our OpenLDAP server. The services I have configured so far all had their own means to authenticate against LDAP; some need external plugins, some are configured in web interfaces and others in configuration files. Honestly, however, I never understood what they did under the hood.

I had a little epiphany this week when I tried to integrate a homeassistant instance. Homeassistant does not have a fancy front-end to do this; instead, it is realized with a simple shell script. There's an example on github which can be used and is actually not that hard to comprehend.

In summary, what it does is make a request to the LDAP server, either via ldapsearch (part of the openldap-tools package) or curl (which needs to be compiled with LDAP integration). An example request with ldapsearch could look like this:

ldapsearch -H ldap://ip.of.ldap.server \
    -b "CN=Users,DC=your,DC=domain,DC=com" \
    -D "CN=Robin Schubert,CN=Users,DC=your,DC=domain,DC=com" \
    -W

Executed from the command line, this will prompt for the user's password and make the request to the server. If everything works fine, the command will exit with exit code 0; if the exit code differs from 0, the request failed for whatever reason. This result is passed on.
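
A minimal sketch of how a calling script might act on that exit code (the -w flag replaces the interactive password prompt; the server address, DNs and variable names are placeholders):

if ldapsearch -H ldap://ip.of.ldap.server \
    -b "CN=Users,DC=your,DC=domain,DC=com" \
    -D "CN=$username,CN=Users,DC=your,DC=domain,DC=com" \
    -w "$password" > /dev/null 2>&1; then
    exit 0  # credentials are valid
else
    exit 1  # bind failed; reject the login
fi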

That's it. Nothing new. Why, then, didn't I think of such a simple solution? The request via ldapsearch can of course be further refined, adding filters and piping the output through sed to map e.g. display names or groups and roles.

Playing with PGP in Python using PGPy

I was exploring different means to deal with electronic signatures in Python this week. The first library I found was python-gnupg; I should have been more suspicious when I saw that the last update was 4 years ago. They may be calling it "pretty bad protocol" for a reason. It is a wrapper around the gpg binary, using Python's subprocess to call it. This was not really what I wanted. For similar reasons, Kushal started johnnycanencrypt in 2020, a Python library that interfaces the Rust OpenPGP lib sequoia-pgp, and which I have yet to explore further.

A third option I found is PGPy, a pure Python implementation of OpenPGP. Going through the examples in their documentation it feels straightforward; for the relatively simple use case I have (managing keys, signing and verifying signatures), it should be perfectly usable.
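
As a minimal sketch of that use case, assembled from the examples in the PGPy documentation (names and parameter choices are illustrative):

import pgpy
from pgpy.constants import (PubKeyAlgorithm, KeyFlags, HashAlgorithm,
                            SymmetricKeyAlgorithm, CompressionAlgorithm)

# generate a new RSA key and attach a user id to it
key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 2048)
uid = pgpy.PGPUID.new("Example User", email="user@example.com")
key.add_uid(uid,
            usage={KeyFlags.Sign},
            hashes=[HashAlgorithm.SHA256],
            ciphers=[SymmetricKeyAlgorithm.AES256],
            compression=[CompressionAlgorithm.ZLIB])

# sign a message and verify the signature with the public key
message = pgpy.PGPMessage.new("some text to sign")
signature = key.sign(message)
assert key.pubkey.verify(message, signature)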

That's been my week

Nothing I tried this week was groundbreaking or new, but it either interested me or kept me busy in some way. I wonder what the statistics would look like if I counted how many times I look up the same issues and problems on the internet. Maybe writing some of them down will help me remember - or at least give me the possibility to look things up offline in my own records ;)

by Robin Schubert at August 19, 2022 12:00 AM

Kubernetes 1.25 Enhancements Role Lead! #33

June 14, 2022

Following up on my earlier post about the Kubernetes Release Team:


I’m serving as the Enhancements Role Lead for the current Kubernetes 1.25 Release Team.

As a role lead this time, I have a group of five outstanding shadows whom I am not only mentoring to become future leads, but from whom I am also learning - both "how to teach" and "how to learn".

I haven't posted in a long time (ups & downs & new roles & responsibilities & sometimes you don't feel like doing anything at all & it literally takes all the energy even to do what's required).

So, just adding that I was also an Enhancements Shadow on the Kubernetes 1.24 Release Team, and my former role lead, Grace Nguyen, nominated me to be the next role lead at the conclusion of the previous release cycle.

When I look back on my time throughout these three cycles, I’m amazed at how much I’ve learned. It’s been a great experience. 🙂 Not only did I learn, but I also felt recognized.

Currently, we're at Week 4 of the 1.25 release cycle, and it's one of the busiest for the Enhancements role (as we're approaching Enhancements Freeze in a week). I would say we're doing well so far! 😄


And one more thing before I finish up this small post!

I got to go to my first ever KubeCon event in person!

I had the opportunity to attend the KubeCon EU 2022 event in Valencia, Spain (my first ever international travel as well). I was astonished that so many people knew who I was (anything more than zero was “so many” for me) and that I already belonged to a tiny group of people. It was an incredible feeling.

I'm not a very photo person, but sharing some 🙂

[photos from KubeCon EU 2022]

June 14, 2022 12:00 AM

Progressive Enhancement is not anti-JavaScript

Yesterday, I came across a tweet by Sara Soueidan which resonated with me, mostly because I have had this discussion (or heated argument) quite a few times with many folks. Please go and read her tweet thread, since she mentions some really great points about why progressive enhancement is not anti-js. As someone who cares about security, privacy, and accessibility, I have always been an advocate of progressive enhancement. I believe that a website (or any web-based solution) should be accessible even without JavaScript in the browser. And more often than not, people take me for someone who is anti-JavaScript. Well, let me explain, with the help (a lot of help) of resources already created by other brilliant folks.

What is Progressive Enhancement?

Progressive enhancement is the idea of making a very simple baseline foundation for a website that is accessible and usable by all users, irrespective of their input/output devices, browsers (or user-agents), or the technology they are using. Then, once you have done that, you sprinkle more fancy animations and custom UI on top, which might make it look more beautiful for users with the ideal devices.

I know I probably didn't do a perfect job explaining the idea of progressive enhancement. So honestly, just go and watch this video on progressive enhancement by Heydon Pickering.

So how to do this Progressive Enhancement?

If you saw the video by Heydon, I am sure you are starting to get some idea. Here I am going to reference another video, titled Visual Styling vs. Semantic Meaning, created by Manuel Matuzović. I love how, in this video, Manuel shares the idea of building semantically first and then styling visually.

So I think a good way to do progressive enhancement is:

  1. Start with HTML - This is a very good place to start, because not only does this ensure that almost all browsers and user devices can render it, but it also helps you think semantically instead of in terms of the visual design. That alone makes your website better not only for different browsers, but also for screen reader and assistive technology users.

  2. Add basic layout CSS progressively - This is the step where you start applying visual design, but only the basic layouts. This progressively enhances the visual look of the website, and you can also add things like better focus styles, etc. Be careful and check caniuse.com to use CSS features that are well supported across most browsers in different versions. Remember what Heydon said? "A basic layout is not a broken layout".

  3. Add fancy CSS progressively - Add more recent CSS features for layout and to progressively enhance the visual styling of your website. Here you can add much newer features that make the design look even more polished.

  4. Add fancy JavaScript sparkles progressively - If there are animations and interactions that you would like the user to have that are not possible with HTML & CSS, start adding your JavaScript at this stage. JavaScript is often necessary for creating accessible custom UIs, so absolutely use it when necessary to progressively enhance the experience of your users based on the user-agents they have (see the small sketch below).

SEE! I told you to add JavaScript! So no, progressive enhancement is not about being anti-JavaScript. It's about progressively adding JavaScript wherever necessary to enhance the features of the website, without blocking the basic content, layout and interactions for non-JavaScript users.
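
As a small, hypothetical sketch of the idea: the plain link below works for everyone, and JavaScript only upgrades the experience where it is available.

<!-- baseline: a regular link that works without JavaScript -->
<a id="search-link" href="/search">Search</a>

<script>
  // enhancement: if JavaScript runs, open an inline search instead
  document.getElementById("search-link").addEventListener("click", function (event) {
    event.preventDefault();
    // ...show an inline search widget here...
  });
</script>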

Well, why should I not write everything in JavaScript?

I know it's trendy these days to learn fancy new JavaScript frameworks and write fancy new interactive websites. So many of you at this point must be like, "Why won't we write everything in JavaScript? Maybe you hate JavaScript, that's why you are talking about these random HTML & CSS things. What are those? Is HTML even a programming language?"

Well, firstly, I love JavaScript. I have contributed to many JavaScript projects, including jQuery. So no, I don't hate JavaScript. But I love to use JavaScript for what JavaScript is supposed to be used for, and in most cases, layout or loading basic content isn't one of those things.

But who are these people who need websites to work without JavaScript?

  • People who have devices with only older browsers. Remember, buying a new device isn't so easy in every part of the world, and some devices may have user-agents that don't support fancy JavaScript. But their owners still have the right to read the content of the website.
  • People who care about their security and privacy. A lot of security- and privacy-focused people prefer using a browser like Tor Browser with JavaScript disabled, to avoid any kind of malicious JavaScript or JavaScript-based tracking. Some users even use extensions like NoScript with common browsers (Firefox, Chrome, etc.) for similar reasons. Just because they care about their security and privacy doesn't mean they shouldn't have access to website content.
  • People with not-so-great internet. Many parts of the world still don't have access to fast internet and rely on 2G connections. Loading a huge bundled JavaScript framework with all its sparkles and features often takes an unrealistically long time. But these users should still be able to access the content of a website article.

So, yes. It's not about not using JavaScript. It's about starting without JavaScript, and then adding your bells and whistles with JavaScript. That way, people who don't use JavaScript can still access at least the basic content.

See this amazing example of progressive enhancement using JavaScript by Adrian Roselli: https://twitter.com/aardrian/status/1527735474592284672

Here is another really great talk by Max Böck in id24: https://www.youtube.com/watch?v=8RdrRCq8VzU

May 20, 2022 08:48 PM

Django: How to acquire a lock on the database rows?

select_for_update is the answer if you want to acquire a lock on database rows. The lock is only released after the transaction is completed. This is similar to the SELECT ... FOR UPDATE statement in SQL.

>>> Dealership.objects.select_for_update().get(pk='iamid')
>>> # Here lock is only required on Dealership object
>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))

select_for_update has four arguments with these default values: nowait=False, skip_locked=False, of=(), no_key=False.

Let's see what all these arguments mean.

nowait

Think of the scenario where the lock is already acquired by another query; in this case, you may want your query to wait or to raise an error. This behavior is controlled by nowait: if nowait=True, a DatabaseError is raised; otherwise, the query waits for the lock to be released.

skip_locked

As the name implies, it helps to decide whether locked rows should be considered in the evaluated query. If skip_locked=True, locked rows will not be considered.

nowait and skip_locked are mutually exclusive; using both together will raise a ValueError.
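
A minimal sketch of the nowait behavior, reusing the Dealership example from above (the error handling is illustrative):

from django.db import DatabaseError, transaction

try:
    with transaction.atomic():
        # fails immediately with DatabaseError if another transaction holds the lock
        dealership = Dealership.objects.select_for_update(nowait=True).get(pk='iamid')
        # ... work with the locked row ...
except DatabaseError:
    print("row is locked by another transaction")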

of

When a select_for_update query is evaluated, the lock is also acquired on the related rows selected via select_related in the query. If one doesn't wish for that, one can use of, in which one can specify the objects to acquire a lock on:

>>> Dealership.objects.select_related('oem').select_for_update(of=('self',))
# Just be sure we don't have any nullable relation with OEM

no_key

This helps you to create a weaker lock. It means that other queries can still create new rows which refer to the locked rows (via any referencing relationship).

A few more important points to keep in mind: select_for_update doesn't allow nullable relations, so you have to explicitly exclude these nullable conditions. In auto-commit mode, select_for_update fails with a TransactionManagementError; you have to explicitly wrap the code in a transaction. I have struggled with both of these points :).
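
For the transaction point, a minimal sketch of the required wrapper:

from django.db import transaction

# evaluating a select_for_update() queryset outside an atomic block
# raises TransactionManagementError in auto-commit mode
with transaction.atomic():
    dealership = Dealership.objects.select_for_update().get(pk='iamid')
    # ... modify and save the locked row before the block ends ...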

That is all you need to know about select_for_update to use it in your code and make changes to your database.

Cheers!

#Python #Django #ORM #Database

May 14, 2022 02:06 PM

There is a lot more to autocomplete than you think

Anyone who has dealt with the <form> tag in HTML might have come across the autocomplete attribute. Most developers just put autocomplete="on" or autocomplete="off" based on whether they want users to be able to autocomplete the form fields or not. But there's much more to the autocomplete attribute than many folks may know.

Browser settings

Most widely used browsers (Firefox, Chrome, Safari, etc.) by default remember information that is submitted using a form. When the user later tries to fill in another form, browsers look at the name or type attribute of the form field and then offer to autocomplete or autofill it based on the saved information from previous form submissions. I am assuming many of you have experienced these autocompletion suggestions while filling up forms. Some browsers, like Firefox, also look at the id attribute and sometimes even at the value of the <label> associated with the input field.

Autofill detail tokens

For a long time, the only valid values for the autocomplete attribute were "on" or "off", based on whether the website developer wanted to allow the browser to automatically complete the input. In the case of "on", however, it was left entirely to the browser to determine which value was expected by the input field. For some time now, the autocomplete attribute has allowed some other values, which are collectively called autofill detail tokens.

<div>
  <label for="organization">Enter your organization name</label>
  <input name="organization" id="organization" autocomplete="organization">
</div>

These values help tell the browser exactly what the input field expects, without the browser needing to guess. There is a big list of autofill detail tokens. Some of the common ones are "name", "email", "username", "organization", "country", "cc-number", and so on. Check the WHATWG Standard for autofill detail tokens to understand what the valid values are and how they are determined.

There are two different autofill detail tokens associated with passwords which have some interesting features apart from the autocompletion:

  • "new-password" - This is supposed to be used for "new password field"s or for "confirm new password field"s. This helps separate a current password field from a new password field. Most browsers and most password managers, when they see this in autocomplete attribute, will avoid accidentaly filling existing passwords. Some even suggest a new randomly generated password for the field if autocomplete has "new-password" value.
  • "current-password" - This is used by browsers and password managers to autofill or suggest autocompletion with the current saved password for that email/username for that website.

The above two tokens really help in intentionally separating new password fields from login password fields. Otherwise, browsers and password managers don't have much to distinguish the two kinds of fields by and may guess wrong.
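
A minimal sketch of both tokens in use (the field names are illustrative):

<!-- login form -->
<input type="password" name="password" autocomplete="current-password">

<!-- sign-up or change-password form -->
<input type="password" name="new-password" autocomplete="new-password">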

Privacy concerns

Now, all of the above points might already be giving privacy and security nightmares to many of you. Firstly, the above scenario works only if you are on the same computer, using the same account and the same browser. But there are also a few things you can do to avoid autocompletion, or the saving of data, when filling up a form.

  • Use the browser in private/incognito mode. Most browsers will not save the form data submitted to a website when opened in incognito mode. They will, however, still suggest autocompletion based on the information saved in normal mode.
  • If you already have autocomplete information saved from before but want to remove it now, you can. Most browsers allow you to clear form and search history.
  • If you want to disable autofill and autocomplete, you can do that as well from the browser settings. This will also tell the browser to never remember the values entered into form fields.

You can find the related information for different browsers here:

Now, if you are a privacy-focused developer like me, you might be wondering, "Can't I, as a developer, help protect privacy?". Yes, we can! That's exactly what autocomplete="off" is still there for. We can add that attribute to an entire <form>, which will disable both remembering and autocompletion of all form data in that form. We can also add autocomplete="off" individually to a specific <input>, <textarea> or <select> to disable the remembering and autocompletion of specific fields instead of the entire form.
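
For example (hypothetical markup), both levels look like this:

<!-- disable remembering and autocompletion for the whole form -->
<form autocomplete="off"> ... </form>

<!-- or only for a single sensitive field -->
<input type="text" name="one-time-pin" autocomplete="off">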

PS: Even with autocomplete="off", most browsers still offer to remember usernames and passwords. This is actually done for the same reason digital security trainers ask people to use password managers: so that users don't use the same simple password everywhere just because they have to remember it. As a digital security trainer, I would still recommend not using your browser's save-password feature and instead using a password manager. Password managers actually follow the same rule of remembering and auto-filling username and password fields even with autocomplete="off".

Accessibility

So, as a privacy-focused developer, you might now be thinking, "Well, I should just use autocomplete="off" in every <form> I write from today". Well, that raises some huge accessibility concerns. If you love standards, then look specifically at Understanding Success Criterion 1.3.5: Identify Input Purpose.

There are folks with different disabilities who really benefit from the autocomplete attribute, which makes it super important for accessibility:

  • People with disabilities related to memory, language or decision-making benefit immensely from the auto-filling of data, and from not needing to remember the information every time they fill up a form.
  • People with disabilities who prefer images/icons for communication can use assistive technology to add icons associated with the various input fields. A lot of them can benefit from proper autocomplete values when the name attribute is not meaningful.
  • People with motor disabilities benefit from not needing to manually input forms every time.

So, given that almost all browsers have settings to disable these features, it might be okay not to always use autocomplete="off". But if there are fields that are super sensitive, for which you would never want the browser to save information (e.g., government ID, one-time pin, credit card security code), you should use autocomplete="off" on the individual fields instead of the entire <form>. Even if you really, really think that the entire form is super sensitive and you need to apply autocomplete="off" on the entire <form> element to protect your users' privacy, you should still at least use autofill detail tokens on the individual fields. This ensures that the browser doesn't remember the data entered or suggest autofills, but still helps assistive technologies to programmatically determine the purpose of the fields.

Recommended further readings:

May 08, 2022 10:30 AM

Network operations

what is a network?

A network is a group of computers and computing devices connected together through communication channels, such as cables or wireless media. The computers connected over a network may be located in the same geographical area or spread across the world. The Internet is the largest network in the world and can be called "the network of networks".

ip address

Devices attached to a network must have at least one unique network address identifier known as the IP (Internet Protocol) address. The address is essential for routing packets of information through the network. Exchanging information across the network requires using streams of small packets, each of which contains a piece of the information going from one machine to another. These packets contain data buffers, together with headers which contain information about where the packet is going to and coming from, and where it fits in the sequence of packets that constitute the stream. Networking protocols and software are rather complicated due to the diversity of machines and operating systems they must deal with, as well as the fact that even very old standards must be supported.

IPv4 and IPv6

There are two different types of IP addresses available: IPv4 (version 4) and IPv6 (version 6). IPv4 is older and by far the more widely used, while IPv6 is newer and is designed to get past limitations inherent in the older standard and furnish many more possible addresses.

IPv4 uses 32 bits for addresses; there are only about 4.3 billion unique addresses available. Furthermore, many addresses are allotted and reserved, but not actually used. IPv4 is considered inadequate for meeting future needs because the number of devices on the global network has increased enormously in recent years.

IPv6 uses 128 bits for addresses; this allows for 3.4 × 10^38 unique addresses. If you have a larger network of computers and want to add more, you may want to move to IPv6, because it provides more unique addresses. However, it can be complex to migrate to IPv6; the two protocols do not always inter-operate well. Thus, moving equipment and addresses to IPv6 requires significant effort and has not been quite as fast as was originally intended. We will discuss IPv4 more than IPv6, as you are more likely to deal with it.

One reason IPv4 has not disappeared is that there are ways to effectively make many more addresses available by methods such as NAT (Network Address Translation). NAT enables sharing one IP address among many locally connected computers, each of which has a unique address only seen on the local network. While this is used in organizational settings, it is also used in simple home networks. For example, if you have a router hooked up to your Internet Provider (such as a cable system), it gives you one externally visible address, but issues each device in your home an individual local address.

decoding IPv4

A 32-bit IPv4 address is divided into four 8-bit sections called octets.

Example: IP address → 172.16.31.46
Bit format → 10101100.00010000.00011111.00101110

NOTE: Octet is just another word for byte.
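
A quick sketch to reproduce that conversion in Python:

# convert a dotted-quad IPv4 address into its four binary octets
ip = "172.16.31.46"
print(".".join(f"{int(octet):08b}" for octet in ip.split(".")))
# prints: 10101100.00010000.00011111.00101110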

Network addresses are divided into five classes: A, B, C, D and E. Classes A, B and C are split into two parts: the network address (Net ID) and the host address (Host ID). The Net ID is used to identify the network, while the Host ID is used to identify a host on the network. Class D is used for special multicast applications (information is broadcast to multiple computers simultaneously) and Class E is reserved for future use.

  • Class A network address – Class A addresses use the first octet of an IP address as their Net ID and the other three octets as the Host ID. The first bit of the first octet is always set to zero, so only 7 bits can be used for unique network numbers. As a result, there are a maximum of 126 Class A networks available (the addresses 0000000 and 1111111 are reserved). Not surprisingly, this was only feasible when there were very few unique networks with large numbers of hosts. As the use of the Internet expanded, Classes B and C were added in order to accommodate the growing demand for independent networks. Each Class A network can have up to 16.7 million unique hosts on its network. The range of host addresses is from 1.0.0.0 to 127.255.255.255.

  • Class B network address – Class B addresses use the first two octets of the IP address as their Net ID and the last two octets as the Host ID. The first two bits of the first octet are always set to binary 10, so there are a maximum of 16,384 (14 bits) Class B networks. The first octet of a Class B address has values from 128 to 191. The introduction of Class B networks expanded the number of networks, but it soon became clear that a further level would be needed. Each Class B network can support a maximum of 65,536 unique hosts on its network. The range of host addresses is from 128.0.0.0 to 191.255.255.255.

  • Class C network address – Class C addresses use the first three octets of the IP address as their Net ID and the last octet as their Host ID. The first three bits of the first octet are set to binary 110, so almost 2.1 million (21-bits) Class C networks are available. The first octet of a Class C address has values from 192 to 223. These are most common for smaller networks which don’t have many unique hosts. Each Class C network can support up to 256 (8-bits) unique hosts. The range of host addresses is from 192.0.0.0 to 223.255.255.255.

what is name resolution?

Name Resolution is used to convert numerical IP address values into a human-readable format known as the hostname. For example, 104.95.85.15 is the numerical IP address that refers to the hostname whitehouse.gov. Hostnames are much easier to remember!

Given an IP address, one can obtain its corresponding hostname. Accessing the machine over the network becomes easier when one can type the hostname instead of the IP address.
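
A quick sketch of both directions in Python, using the example above (the reverse lookup only succeeds if the IP address has a PTR record):

import socket

print(socket.gethostbyname("whitehouse.gov"))   # hostname → IP address
print(socket.gethostbyaddr("104.95.85.15")[0])  # IP address → hostname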

Then come the network configuration files, which are essential to ensure that interfaces function correctly. They are located in the /etc directory tree. However, the exact files used have historically depended on the particular Linux distribution and version being used.

For Debian family configurations, the basic network configuration files could be found under /etc/network/, while for Red Hat and SUSE family systems one needed to inspect /etc/sysconfig/network.

Network interfaces are a connection channel between a device and a network. Physically, a network interface can be provided by a network interface card (NIC), or can be implemented more abstractly in software. You can have multiple network interfaces operating at once. Specific interfaces can be brought up (activated) or brought down (de-activated) at any time.

A network requires the connection of many nodes. Data moves from source to destination by passing through a series of routers and potentially across multiple networks. Routers maintain routing tables containing the addresses of each node in the network. The IP routing protocols enable routers to build up a forwarding table that correlates final destinations with next-hop addresses.

Let's learn about more networking tools, like wget and curl. Sometimes you need to download files and information, but a browser is not the best choice, either because you want to download multiple files and/or directories, or because you want to perform the action from a command line or a script. wget is a command line utility that can capably handle these kinds of downloads, whereas curl can be used to transfer data to or from any specific URL.
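
Two illustrative invocations (the URLs are placeholders):

wget -r -np https://example.com/files/   # recursively download a directory
curl -I https://example.com              # fetch only the HTTP response headers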

File Transfer Protocol (FTP)

File Transfer Protocol (FTP) is a well-known and popular method for transferring files between computers using the Internet. This method is built on a client-server model. FTP can be used within a browser or with stand-alone client programs. FTP is one of the oldest methods of network data transfer, dating back to the early 1970s.

Secure Shell (SSH)

Secure Shell (SSH) is a cryptographic network protocol used for secure data communication. It is also used for remote services and other secure services between two devices on the network and is very useful for administering systems which are not easily available to physically work on, but to which you have remote access.

by climoiselle at April 12, 2022 02:22 PM
