Planet dgplug

August 19, 2018

Kushal Das

Aadhaar, the mass surveillance system

If you are following me on Twitter, you have already seen a lot of (re)tweets related to Aadhaar. For people hearing this term for the first time, it is a 12-digit unique identification number issued by the Unique Identification Authority of India (UIDAI). It is also the world’s largest biometric ID system. It is supposed to be a voluntary service.

From the very beginning, this project tried to hide its details from Indian citizens. Be it privacy advocates, security researchers, or human rights activists, everyone predicted that this would become a monster, a mass surveillance system, a tool of choice for power-hungry dictators.

As with any other complex system, the majority of people only see the advertisements from the government and completely miss all the problems and horror stories this project is creating. Here are a few links below for interested people to read.

Neither my wife nor our daughter has an Aadhaar (I don’t have one either), which means Py (our daughter) did not get admission to any school last year.

Whenever security researchers or journalists tried to report on the project, the UIDAI hid behind denials and police complaints against those journalists or researchers. There are various reports on how one can get access (both read and write) to the actual production database for as little as $10-30. We now have examples of terrorist organizations having access to the same database. The UIDAI kept claiming that this is unhackable technology, and that for security they have a 13-foot wall outside the data center, which will surely keep all hackers away.

They have already built 360-degree databases on top of Aadhaar, and now they are trying to link DNA to the same system.

The current government of India tried its level best to argue in the Supreme Court of India that Indians don’t have any right to privacy. But, thankfully, they failed in this effort, and the Supreme Court ruled privacy to be a fundamental right. We are now waiting for the judgment on Aadhaar itself (which will hopefully come out in the next few weeks).

Meanwhile, the evil nexus is pushing Aadhaar down the throats of Indian citizens (and of Pakistani spies and gods).

A few days ago, at an event in Jaipur, Edward Snowden was asked the following question.

How big of an issue is privacy?

His answer started by examining where that argument comes from.

The answer is Nazi Germany. The Nazi minister of propaganda, Joseph Goebbels, did this. Because he was trying to change the conversation away from “What are your rights?” and “What evidence must the government show?” to violate them, to intrude into your private life, and instead said “Why do you need your rights?”, “How can you justify your rights?”, “Isn’t it strange that you are invoking your rights? Isn’t that unusual?”. But, in a free society this is the opposite of the way it is supposed to work. You don’t need to explain why you have a right. You don’t need to explain why it is valuable, why you need it. It is for the government to explain why you don’t deserve it. They go to a court, they show that you are a criminal. This is increasingly falling out of favor, because the governments and companies think that it is inefficient. It is too much work. Life would be easier, life would be more convenient for them, life would be more profitable for them if we didn’t have any rights at all.

But, privacy isn’t about something to hide, privacy is about something to protect. And that is the very concept of liberty. It is the idea that there can be some part of you, of your life, of your ideas that belong to you, not to society. And you get to make the decision about who you share that with. -- Edward Snowden

Why are we reading this in your blog?

This might be a question for many of you: why are you reading this in a blog post, or on a planet? Because we, the people with knowledge of technology, are also part of these evil plans. We now know of many private companies partnering with their local governments to build 360-degree profiles, to track citizens, and to run mass surveillance systems. For example, related to Aadhaar: for the last 4 years, Google silently pushed the Aadhaar support phone number (which UIDAI is now trying to distance itself from) to every Android phone in India. When they were caught red-handed, they claimed that they did it inadvertently. The Finacle software by Infosys denies creation of bank accounts without Aadhaar. Microsoft is working to link Skype with Aadhaar. Bill Gates is trying to push the idea that Aadhaar is all good and does not have any issues.

What can you do?

You can start by educating yourself first. Read more about the technologies which control our lives. Question things, and try to understand how they actually work. Write about them, ask questions of the people in power. Talk about the issues with your friends and family.

This is not going to be an easy task, but we should all keep fighting back to ensure a better future for the next generation.

by Kushal Das at August 19, 2018 06:33 AM

August 16, 2018

Robin Schubert

Sync data from mobile phone with rsync

I'm quite fond of having a Google-free phone. I run LineageOS without g-apps, use my own CalDAV and CardDAV server to sync my contacts and calendars, can find every piece of software I need in the F-Droid app, and have lived happily ever since.

However, what bugged me was the synchronization of personal photos and videos. I made several attempts to solve that using WebDAV (ownCloud/Nextcloud), but my phone was not happy with the available clients and the battery would not last very long.

I now have a setup for secure synchronization that I'm quite happy with, using rsync from my phone to my server. It's simple and slim, and feels just too familiar to be a bad idea.

Termux - Terminal Emulator and Linux Environment

Termux is what I've been looking for on Android for a long time. Native Linux on my phone would be priceless, but its ease of use and installation makes Termux a more than good compromise. It comes with a nifty package manager and allows me to run small Python and R scripts, including web scraping etc., on the fly.

You can get Termux for free from the Google Play Store; the extensions, however, cost around $2 each (which is okay, you'll be supporting some really good work!). Using the F-Droid app you can install the extensions for free.

I also installed the Termux:Widget extension, which allows starting scripts with a single tap from the home screen - a very handy addition. I use it to reboot my Raspberry Pi or wake-on-lan my computer in the basement.

Set up rsync

To set up Termux for this task, install rsync first:

pkg install rsync

Generate a key-pair (I just use the defaults):

ssh-keygen

Per default this will create the files id_rsa and id_rsa.pub in your ~/.ssh directory. Make sure to put the contents of id_rsa.pub into the ~/.ssh/authorized_keys file on your target server.

And set up the Termux storage:

termux-setup-storage

Create a folder for the Termux:Widget shortcuts and create the script:

mkdir .shortcuts
echo "rsync -h -r --info=progress2 ~/storage/dcim/* robin@<myserver>:/<path_to_my_rsync_folder>" > .shortcuts/sync

Conclusion

With the Termux widget, this simple setup gives me stable and quick sync of my data, and leaves me with a few worries less. In fact, I set this up on my wife's phone as well, since it works so nicely.

by Robin Schubert at August 16, 2018 12:00 AM

August 15, 2018

Farhaan Bukhsh

File Indexing In Golang

I have been working on a pet project to write a file indexer, a utility that helps me search a directory for a given word or phrase.

The motivation behind building this utility was to be able to search the chat log files for dgplug. We have a lot of online classes and guest sessions, and at times we remember just a name or a phrase used in a class; backtracking through the files using these clues is not possible as of now. I thought I would take a stab at this problem, and since I am trying to learn golang, I implemented my solution in it. I worked on this over a span of two weeks, during which I spent time upskilling on certain aspects and coming up with a clean solution.

Exploration

This started with exploring similar solutions, because why not? It is always better to improve an existing solution than to write your own. I didn’t find any which suited our needs, so I ended up writing my own. The exploration led me to discover a few libraries that could be useful to us: fulltext and Bleve.

I found bleve to have better documentation and a really beautiful thought process behind it. They have a very minimal yet effective design for the library. By the end of it, I was sure I was going to use it, and there was no going back.

Working On the Solution

After all the exploration, I tried to break the problem into smaller problems and solve each one of them. The first was to understand how bleve works. I found out that bleve first creates an index, for which we need to give it the list of files. The index is basically a map structure behind the scenes, where you give it an id and the content to be indexed. So what could be a unique key for a file in a filesystem? The path of the file: I used it as the id, and the content of the file as the value.

After figuring this out, I wrote a function which takes a directory as the argument and gives back the path and content of each file in it. After a few iterations of improvement, it diverged into two functions: one responsible for getting the paths of all the files, the other for reading a file and getting its content.

func fileNameContentMap() []FileIndexer {
	var ROOTPATH = config.RootDirectory
	var files []string
	var filesIndex FileIndexer
	var fileIndexer []FileIndexer

	// collect the paths of all regular files under ROOTPATH
	err := filepath.Walk(ROOTPATH, func(path string, info os.FileInfo, err error) error {
		if !info.IsDir() {
			files = append(files, path)
		}
		return nil
	})
	checkerr(err)

	// pair every path with its content
	for _, filename := range files {
		content := getContent(filename)
		filesIndex = FileIndexer{Filename: filename, FileContent: content}
		fileIndexer = append(fileIndexer, filesIndex)
	}
	return fileIndexer
}

This uses a struct which stores the name of the file and its content. And since I can have many files, I need an array of that struct. This is how a simple data structure evolves into a complex one.

Now I have the utilities for getting all the files, getting the content of each file, and making an index.

This forms a crucial step of what we are going to achieve next.

How Do I Search?

Now that I was able to do the part which prepares my data, the next logical step was to retrieve search results. The way we search something is by passing a query, so I duck-typed a function which accepts a string, and then went on a documentation spree to find out how to search in bleve. I found a simple implementation which returns the id of the matching file (which is its path) and a match score.

func searchResults(indexFilename string, searchWord string) *bleve.SearchResult {
	index, err := bleve.Open(indexFilename)
	checkerr(err)
	defer index.Close()

	// build a query-string query and run it against the index
	query := bleve.NewQueryStringQuery(searchWord)
	searchRequest := bleve.NewSearchRequest(query)
	searchResult, err := index.Search(searchRequest)
	checkerr(err)
	return searchResult
}

This function opens the index, searches for the term, and returns the results.

Let’s Serve It

After all that was done, I needed a service which does this on demand, so I wrote a simple API server with two endpoints: index and search. The way mux works is that you give the handler an endpoint and the function to be mapped to it. I had to restructure the code to make this work. I also faced a crazy bug which, when I narrowed it down, turned out to be a memory leak: I had left a file read stream open. So remember, when you Open, always defer Close.

I used Postman to test it heavily, and it was returning good responses. A dummy response looks like this:

[{"index":"irclogs.bleve","id":"logs/some/hey.txt","score":0.6912244671221862,"sort":["_score"]}]

Missing Parts?

The first missing part was that I didn’t use any dependency manager, which Kushal pointed out to me, so I ended up using dep for this. The next one was the best problem: how to auto-index a file. Suppose my service is running and I add one more file to the directory; this file’s content wouldn’t come up in a search, because the indexer has not run on it. This was a beautiful problem, and I tried to approach it from many different angles. First I thought I would re-run the service every time I add a file, but that’s not a graceful solution. Then I thought I would write a cron job to ping /index at regular intervals, and yet again that was a bad option. Finally I thought: what if I could detect changes to the files? This led me to explore gin, modd and fresh.

Gin was not very compatible with mux, so I didn’t use it. modd was very nice, but I needed to kill the server to restart it, since two services cannot run on a single port, and every time I killed that service I killed the modd daemon too, so that possibility also got ruled out.

Finally, the best solution was fresh, although I had to write a custom config file to suit the requirements. This still has issues with nested repository indexing, which I am trying to figure out.

What’s Next?

This project is yet to be containerised, and there are missing test cases, so I will be working on those as and when I get time.

I have learnt a lot of new things about filesystems and how they work because of this project. It helped me appreciate a lot of golang concepts and made me realise the power of static typing.

If you are interested you are welcome to contribute to file-indexer. Feel free to ping me.

Till then, Happy Hacking!

 

by fardroid23 at August 15, 2018 02:49 PM

Jason Braganza (Work)

Book Review – i want 2 do project. tell me wat 2 do

Click me to buy!


TL;DR? It’s awesome. Buy it right now.

I was looking to dip my toes into some sort of structured help with the summer training and open source in general, because while I knew what I wanted, I just didn’t know how to go about it.

And then I realised that one of our mentors had actually gone and written a whole book on the how-to. So, I bought the paperback. The binding is really good, the paper really nice (unlike other tech books I’ve read) and the type large enough to read. I expect to get a lot of use out of the book.

And a lot of use is right. While it’s a slim volume and a pretty quick read, the book is pretty dense when it comes to the wisdom it imparts.

The book has a simple (yet substantial to execute) premise. You’ve just dipped your toe into programming, or you’ve learnt a new language, or you’ve written a few programs, or maybe you’re just brand new. You want to explore the vast, thrilling world that is Open Source Software. What now?

“i want 2 do project. tell me wat 2 do.” answers the “what now” in painstaking detail.

From communication (Mailing List Guidelines) to the importance of focus (Attention to Detail) to working with mentors (the Project chapters) to the tools (Methodology & tools) to the importance of sharpening the saw (Reading …) and finally the importance of your environment (Sustenance), the book covers the entire gamut that a student or a novice programmer with open source would go through.

Shakthi writes like he speaks: pithily, concisely, with the weight of his experience behind his words.

The book is chockfull of quotes (from the Lady Lovelace to Menaechmus to Taleb) that lend heft to the chapters. The references at the end of each chapter will probably keep me busy for the next few months.

The book’ll save you enormous amounts of time and heartache in your journey, were you to heed its advice. It’s that good.

by Mario Jason Braganza at August 15, 2018 10:55 AM

August 13, 2018

Anwesha Das

Twitter from command line

Ever since I started writing code, the toughest job for me has been making peace with the black and green screen, the terminal - it being “The Thing” which keeps my lovely husband (ah ha, really?) away from me. So, as an initiative in my “peace making process”, I have started doing most of my day’s work on this boring screen. Part of that is me trying to do Twitter from the command line. Thus, let us make the (boring) terminal interesting.

To reach that aim, I needed a Python module to access the Twitter API; I used a module called python-twitter. Click is a Python package for creating command line applications; I used it to have a better command line interface. I used Microsoft Visual Studio Code as my primary editor. Like in all previous projects, I leaned on Jupyter Notebook to try out code snippets. I also used Pipenv for the first time here.

import sys
import twitter
import json
import click

After importing the required modules (as mentioned above), the job was to create the boolean command line flags through click, so,

@click.command()
@click.option("--tweet", "-t", is_flag=True, help="Does tweet.")
@click.option("--timeline", "-n", is_flag=True, help="Shows user's timeline.")
@click.option(
   "--directmessage", "-m", is_flag=True, help="Shows user's direct messages."
)

I learnt click from this blogpost; it was really helpful.

I wrote a config.json file holding the required authentication details: consumer key, consumer secret, access token key, access token secret and user id. I got them from my Twitter developer account, in which you have to set your access level to “Read, Write and Direct Messages”. I then create an object of the twitter.Api class, and pass the different flags, tweet, timeline and directmessage, to tweet, see my timeline, and see my direct messages, respectively, from the command line.

I used Black to format my code. Formatting makes the code more readable and easier to review.

The next and final job was to upload it to PyPI using twine. For this, I followed a blogpost I wrote earlier. The source code of the project is available on my GitHub.

If you notice, I used many things for the first time in this small learning effort. Projects of this size are really helpful for learning new things.

Happy tweeting (from the command line).

by Anwesha Das at August 13, 2018 06:07 PM

August 12, 2018

Jason Braganza (Work)

Programming, Day 55

Updates:

  • Much better at gtypist lesson Q1
  • Also automated resizing and compressing my gtypist screenshots via an Automator folder action
  • Done with Chapter 9 of the Lutz Book. I now know of tuples and strings

Automator Screenshot

gtypist screenshot gt16

by Mario Jason Braganza at August 12, 2018 06:48 AM

August 07, 2018

Kushal Das

August 06, 2018

Praveen Kumar

[Event Report] DevConf India-2018

This week I got a chance to attend DevConf India, which was held at Christ University, Bangalore. As per the stats, there were around ~1323 attendees and 110 speakers. There were around 14 parallel tracks (Agile, Blockchain, Cloud and Container, Community, Design, Developer Tools, DevOps, IoT, Machine Learning, Middleware, Platform, QE, Security, Storage) plus BoFs and workshops, so a pretty much completely packed schedule.

Day 1 started with a dance performance by university students, followed by the keynote by Ric Wheeler on "Open source is better for companies/businesses, communities and developers". He talked about how things used to happen back in the day and how far we have come now in terms of software businesses. He also talked about why most organizations are moving toward Open Source nowadays.

After the keynote, I went to the booth area, where most of the community booths were present (Fedora, Foreman, OpenShift, Silverblue, Women Who Code, RDO, Mozilla, ElasticSearch, Devopedia, Ansible), and spent some time answering some OpenShift-related queries from participants.

After the tea break, I went to my workshop, which was about “Getting started with OpenShift using Minishift”. I shared the stage with Budhram, and we had Anjan Nath and Jatan as dedicated volunteers for this workshop (thanks, guys!). We got around 38 participants and everything went smoothly, except for some issues for the Windows users due to expected reasons (no admin permission on the system, or the C:\ drive not being used). We explained the basic navigation of the OpenShift web UI, how to use the OpenShift client (oc) tool to connect to a cluster, and how to deploy an application. We also covered some of the basics of Kubernetes resources.

After Lunch I spent most of the time roaming around the booth area, talking to different people. Met with a lot of old friends as usual.

Day 1 ended with the keynote by Karanbir Singh (kbsingh) about “Open Source won”. He talked about the phase when the Indian Linux groups started, and how now it’s very hard to find any organization which is not consuming an open source project.

Some of us went out for dinner and came back early to the hotel to get some rest :)

Day 2 started with the keynote by Christian Heimes about “Lessons about security”. He talked a bit about Roman culture and how information was kept safe in the old days. He also talked about recent hardware vulnerabilities and why you should always keep your software up to date, especially with security fixes.

After the tea break, I went to volunteer at Baiju's workshop, which was about “RESTful API Development using Go”. There were around 40 participants, and most of them were familiar with golang. Baiju started with a simple HTTP server and then showed how you can build up your routes on top of it without using any framework, to teach the logic of what happens behind the scenes. He then introduced the `mux` module, which is used to filter the request type (GET, DELETE, POST, etc.), so that if a route only serves one request type, others are ignored for it. He also talked about `negroni` for middleware use cases.


Then I attended a talk by Graham about “Data science in the cloud using Python”; he talked about how you can deploy a Jupyter notebook on OpenShift for personal use, or even share it with other colleagues/friends in the organization.

Day 2 ended with a thank-you note for all the organizers, volunteers, college faculty members, housekeeping staff, and everyone involved in making this conference a success.

Kudos to the organizers and volunteers for pulling off such an amazing conference.

by Praveen Kumar (noreply@blogger.com) at August 06, 2018 06:28 PM

July 31, 2018

Jason Braganza (Personal)

Daily Writing, 76 – Mother and Son

_MG_3872


Mothers yielding Bibles, contemplating smearing the blood of lamb chops over her doorway.
Anything to keep her son alive another day.

Antonia Perdu

by Mario Jason Braganza at July 31, 2018 02:51 AM

July 30, 2018

Jason Braganza (Personal)

The Personal MBA

tpmba


There is absolutely nothing I can say about the Personal MBA that hasn’t been said.

I cheat and present Derek Sivers’ notes on the book.

But here’s his point about the book …

Wow. A masterpiece. This is now the one “START HERE” book I'll be recommending to everybody interested in business. An amazing overview of everything you need to know. Covers all the basics, minus buzz-words and fluff. Look at my notes for an example, but read the whole book. One of the most inspiring things I've read in years.
Want proof? I asked the author to be my coach/mentor afterwards. It's that good.

My main regret? That the book was on my shelf nearly three years before I picked it up. Talk about lost time.
And as someone who’s helped friends with their MBAs and helped his wife with her DBA, I can absolutely attest that the Personal MBA does what it claims to do.
It’s world class education for less than 500 bucks.

I’m also a bit jealous and awed. Josh read and synthesised and made notes on so many books and created a smashingly amazing syntopical work.
Which is what I do so agonisingly slowly here :P

Short, pithy notes and chapters keep you engrossed, and the book is pretty fast-paced and engaging for the enormous breadth of knowledge it seeks to distill within its 500 pages.

Personally biased, I loved the chapters on antifragility, optionality and tinkering. Those are Taleb terms. Josh calls them Resilience, Fail Safes and The Experimental Mindset.

But the whole book is awesome!
It’s my new quake book.

I learnt so much and I know I will learn much more as I revisit it again and again.
I’ll close with two things. The short B. C. Forbes passage (all emphases, mine) that Josh closes the book with, and a short audio introduction below.

Your success depends on you.
Your happiness depends on you.
You have to steer your own course.
You have to shape your own fortune.
You have to educate yourself.
You have to do your own thinking.
You have to live with your own conscience.
Your mind is yours and can be used only by you.
You come into this world alone.
You go to the grave alone.
You are alone with your inner thoughts during the journey between.
You make your own decisions.
You must abide by the consequences of your acts …
You alone can regulate your habits and make or unmake your health. You alone can assimilate things mental and things material …
You have to do your own assimilation all through life.
You can be taught by a teacher, but you have to imbibe the knowledge. He cannot transfuse it into your brain.
You alone can control your mind cells and your brain cells.
You may have spread before you the wisdom of the ages, but unless you assimilate it you derive no benefit from it; no one can force it into your cranium.
You alone can move your own legs.
You alone can move your own arms.
You alone can control your own muscles.
You must stand on your feet, physically and metaphorically.
You must take your own steps.
Your parents cannot enter into your skin, take control of your mental and physical machinery, and make something of you.
You cannot fight your son’s battles; that he must do for himself.
You have to be captain of your own destiny.
You have to see through your own eyes.
You have to use your own ears.
You have to master your own faculties.
You have to solve your own problems.
You have to form your own ideals.
You have to create your own ideas.
You must choose your own speech.
You must govern your own tongue.
Your real life is your thoughts.
Your thoughts are your own making.
Your character is your own handiwork.
You alone can select the materials that go into it.
You alone can reject what is not fit to go into it.
You are the creator of your own personality.
You can be disgraced by no man’s hand but your own.
You can be elevated and sustained by no man but yourself.
You have to write your own record.
You have to build your own monument—or dig your own pit. Which are you doing?


by Mario Jason Braganza at July 30, 2018 08:18 AM

July 25, 2018

Shakthi Kannan

Emacs Meetup (virtual), February-March, 2018

I have been trying to hold regular monthly Emacs meetups online, starting from 2018.

The following are the meeting minutes and notes from the Jitsi meetings held online in the months of February and March 2018.

February 2018

The February 2018 meetup was primarily focussed on using Emacs for publishing.

Using Emacs with Hakyll to build websites and resumes was discussed. It is also possible to output multiple formats (PDF, YAML, text) from the same source.

I shared my shakthimaan-blog sources, which use Hakyll to generate the web site. We also discussed the advantages of using static site generators, especially when you have large user traffic to your web site.

I had created the xetex-book-template for creating multilingual book PDFs. Its required features and its usage were discussed in the meeting.

Kushal Das asked about keyboard use in Emacs, in particular for Control and Alt, as he was using a Kinesis. The list of best Emacs keyboard options available at http://ergoemacs.org/emacs/emacs_best_keyboard.html was shared. The advantage of using thumb keys for Control and Alt was obvious with the bowl-shaped keyboard layout of the Kinesis.

We also talked about the Emacs Web Browser (eww), and suggested the use of mu4e for checking e-mails with Emacs.

March 2018

At the beginning of the meetup, the participants asked if there was a live stream available, but we are not doing that at this point in time with Jitsi.

For drawing inside Emacs, I had suggested ASCII art using Artist Mode.

Emacs has support for rendering PDFs inside it, as this old blog post shows: http://www.idryman.org/blog/2013/05/20/emacs-and-pdf/. nnnick then had a question on “Why Emacs?”:

nnnick 19:23:00
Can you tell me briefly why emacs is preferred over other text editors

The discussion then moved to the customization features and extensibility of Emacs that makes it well suited for your needs.

For people who want to start with a basic configuration for Emacs, the following repository was suggested: https://github.com/technomancy/emacs-starter-kit.

I had also shared links on using Org mode and scrum-mode for project management.

I shared my Cask setup link, https://gitlab.com/shakthimaan/cask-dot-emacs, and mentioned that with a rolling distribution like Parabola GNU/Linux-libre, it is quite easy to re-run install.sh for newer Emacs versions and get a consistent setup.

In order to SSH into local or remote systems (VMs), Tramp mode was suggested.

I also shared my presentation on “Literate DevOps” inspired by Howardism https://gitlab.com/shakthimaan/literate-devops-using-gnu-emacs/blob/master/literate-devops.org.

Org entries can also be used to keep track of personal journal entries. Date trees are helpful in this context, as shown on the following web page: http://members.optusnet.com.au/~charles57/GTD/datetree.html.

Tejas asked about using Org files to execute code in different programming languages. This can be done using Org Babel, and the same was discussed.

Tejas 19:38:23
can org mode files be used to keep executable code in other languages apart from elisp?

mbuf 19:38:42
Yes

mbuf 19:39:15
https://orgmode.org/worg/org-contrib/babel/languages.html
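For reference, a minimal Org Babel source block (here running Python, purely as an illustration) looks like this:

```org
#+BEGIN_SRC python :results output
print("Hello from Org Babel")
#+END_SRC
```

Pressing C-c C-c inside the block executes it and inserts the results below it.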

The other useful tools that were discussed for productivity are given below:

Tejas said that he uses perspective-el, but it does not have a save option - just separate workspaces to switch between, basically for different projects.

A screenshot of the session in progress is shown below:

Emacs APAC March 2018 meetup

Arun also suggested using Try for trying out Emacs packages without installation, and cycle-resize package for managing windows.

Tejas and Arun then shared their Emacs configuration files.

Arun 19:51:37
https://github.com/aruntakkar/emacs.d

Tejas 19:51:56
https://github.com/tejasbubane/dotemacs

We closed the session with few references on learning Emacs Lisp:

Tejas 20:02:42
before closing off, can you guys quickly point me to some resources for learning elisp?

mbuf 20:03:59
Writing GNU Emacs Extensions.

mbuf 20:04:10
Tejas: Emacs Lisp manual

Tejas 20:04:35
Thanks 

July 25, 2018 04:25 PM

July 21, 2018

Farhaan Bukhsh

Template Method Design Pattern

This is a continuation of the design pattern series.

I blogged about Singleton once, when I was using it very frequently. This blog post is about the Template Method Design Pattern. Let’s discuss the pattern, then dive into the code and its implementation, and see a couple of use cases.

The Template Method Design Pattern is actually a pattern to follow when there is a series of steps which need to be followed in a particular order. Well, the next question that arises is: “Isn’t every program a series of steps that have to be followed in a particular order?”

The answer is Yes!

This pattern diverges when it becomes a series of functions that have to be executed in the given order. As the name suggests, it is the Template Method Design Pattern, with stress on the word method, because that is what makes it a different ball game altogether.

Let’s understand this with the example of eating at a Buffet. Most of us follow a similar set of specific steps when eating at a Buffet. We all go for the starters first, followed by the main course and then finally, dessert. (Unless it is Barbeque Nation, then it’s starters, starters and starters :))

So this is kind of a template for everyone: Starters --> Main course --> Desserts.

Keep in mind that content in each category can be different depending on the person but the order doesn’t change which gives a way to have a template in the code. The primary use of any design pattern is to reduce duplicate code or solve a specific problem. Here this concept solves the problem of code duplication.

The concept of the Template Method Design Pattern depends on, or rather is very tightly coupled with, Abstract Classes. Abstract Classes themselves are a template for derived classes to follow, but the Template Method Design Pattern takes it one notch higher, where you have a template in a template. Here’s an example of a BuffetHogger class.

from abc import ABC, abstractmethod

class BuffetHogger(ABC):

    @abstractmethod
    def starter_hogging(self):
        pass

    @abstractmethod
    def main_course_hogging(self):
        pass

    @abstractmethod
    def dessert_hogging(self):
        pass

    def template_hogging(self):
        self.starter_hogging()
        self.main_course_hogging()
        self.dessert_hogging()

So if you look here, starter_hogging, main_course_hogging and dessert_hogging are abstract methods, which means every derived class has to implement them, while template_hogging uses these methods and stays the same for all derived classes.

Let’s have a Farhaan class who is a BuffetHogger and see how it goes.

class Farhaan(BuffetHogger):
    def starter_hogging(self):
        print("Eat Chicken Tikka")
        print("Eat Kalmi Kebab")

    def __call__(self):
        self.template_hogging()

    def main_course_hogging(self):
        print("Eat Biryani")

    def dessert_hogging(self):
        print("Eat Phirni")
Now you can spawn as many BuffetHogger subclasses as you want, and they’ll all have the same way of hogging. That’s how we solve the problem of code duplication.
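To see the reuse in action, here is the base class again in condensed form, plus a second (hypothetical) hogger that shares template_hogging unchanged:

```python
from abc import ABC, abstractmethod

# Condensed version of the BuffetHogger above.
class BuffetHogger(ABC):
    @abstractmethod
    def starter_hogging(self): ...

    @abstractmethod
    def main_course_hogging(self): ...

    @abstractmethod
    def dessert_hogging(self): ...

    def template_hogging(self):
        # The fixed order lives here, once, in the base class.
        self.starter_hogging()
        self.main_course_hogging()
        self.dessert_hogging()

# A hypothetical second hogger: only the steps differ, never the order.
class Anwesha(BuffetHogger):
    def starter_hogging(self):
        print("Eat Paneer Tikka")

    def main_course_hogging(self):
        print("Eat Pulao")

    def dessert_hogging(self):
        print("Eat Rasgulla")

Anwesha().template_hogging()
# -> Eat Paneer Tikka
#    Eat Pulao
#    Eat Rasgulla
```

Note that the order is enforced by the base class alone; a subclass cannot reshuffle the courses without overriding template_hogging itself.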
Hope this post inspires you to use this pattern in your code too.
Happy Hacking!

by fardroid23 at July 21, 2018 03:27 PM

July 20, 2018

Shakthi Kannan

Elixir Workshop: MVJ College of Engineering, Bengaluru

I had organized a hands-on scripting workshop using the Elixir programming language for the Computer Science and Engineering department, MVJ College of Engineering, Whitefield, Bengaluru on May 5, 2018.

Elixir scripting session

The department was interested in organizing a scripting workshop, and I felt a new programming language like Elixir, with the power of the Erlang Virtual Machine (VM) behind it, would be a good choice. The syntax and semantics of the Elixir language were discussed along with the following topics:

  • Basic types
  • Basic operators
  • Pattern matching
  • case, cond and if
  • Binaries, strings and char lists
  • Keywords and maps
  • Modules and functions
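To give a flavour of the topics above, a couple of the pattern-matching snippets tried in the Elixir interpreter (iex) looked roughly like this:

```elixir
{a, b} = {1, "two"}        # tuple pattern match: a = 1, b = "two"
[head | tail] = [1, 2, 3]  # list decomposition: head = 1, tail = [2, 3]

case {a, head} do
  {1, 1} -> "both ones"
  _      -> "something else"
end
# => "both ones"
```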

Students had set up Erlang and Elixir on their laptops, and tried the code snippets in the Elixir interpreter. The complete set of examples is available in the following repo:

https://gitlab.com/shakthimaan/elixir-scripting-workshop

A group photo was taken at the end of the workshop.

Elixir scripting session

I would like to thank Prof. Karthik Myilvahanan J for working with me in organizing this workshop.

July 20, 2018 01:00 PM

July 05, 2018

Robin Schubert

How to browse blocked sites with Adblocker

People who browse the web with ad-blockers often find themselves on websites that gray out, block scrolling and show a modal dialog kindly suggesting that they switch off the ad-blocker or whitelist that particular page.

Here's a little work-around that lets you continue browsing most of those sites without whitelisting the page or turning off the ad-blocker: live-editing the HTML.

So whenever you see a banner like this - I've come across this a dozen times now, this is an example from a German online news magazine - open the Web Inspector. There are multiple ways to do this; in Firefox you can open Tools -> Web Developer -> Inspector, or using Chromium it would be Menu -> More tools -> Developer tools, or just hit F12.

I like right-clicking the object I want to inspect and select Inspect element.

So when I inspect the modal dialog and follow the DOM a bit upwards, I find the corresponding <div> tag that describes the dialog. Note the style="display: block;" css rule.

Since we don't want to see this dialog at all, right-click that html element and simply delete the whole node.

Schlurps and the dialog is gone. However, we still have that gray veil. In this example, the responsible <div> tag is the one just right above the previous modal dialog tag. Again we find the style="display: block;" rule and again we simply delete that node.

Finally, the website looks almost normal. But shoot! Scrolling is deactivated. If you happen to use Vim keybindings for navigating in your browser, you might not even notice. However, to be able to scroll with your mouse or arrow keys, find the <body> tag way up the DOM.

You may have guessed it: right, we're not going to delete the <body> tag. Notice the css lines "overflow-y: hidden; height: 911px;". This hides the scroll bar and sets a fixed height to what seems to be my browser window height. You can either delete that css, or - if you want to - modify it to something like "overflow-y: auto; height: 100%;" and you should be browsing that site without ads and annoying modals.
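The same edits can be done in one go from the browser's JavaScript console; the selectors below are hypothetical, so inspect the actual page first to find the right ones:

```js
// Hypothetical selectors -- find the real ones with Inspect element.
var modal = document.querySelector('div.adblock-modal');  // the dialog
if (modal) modal.remove();
var veil = document.querySelector('div.gray-overlay');    // the gray veil
if (veil) veil.remove();
document.body.style.overflowY = 'auto';                   // re-enable scrolling
document.body.style.height = '100%';
```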

by Robin Schubert at July 05, 2018 12:00 AM

May 25, 2018

Anwesha Das

How to use Let’s Encrypt with nginx and docker

In my last blog post, I shared the story of how I set my server up. I mentioned that I’d be writing about getting SSL certificates, so here you go.

When I started working on a remote server somewhere out there on the globe, and letting it into my private space (my home machine), I realised I needed to be much more careful and secure.

The first step to attain security was to set up a firewall to control unwanted incoming intrusions.
The next step was to create a reverse proxy in nginx :

Let us assume we’re running a docker container on a CentOS 7 host, using the latest ghost image. So first, one has to install docker and nginx, and start the docker service:

yum install docker nginx epel-release vim -y

Along with docker and nginx we are also installing epel-release, from which we will later get Certbot for the next part of our project, and vim if you prefer it.

systemctl start docker

Next, I started the docker container; I am using ghost as an example here.

docker run -d --name xyz -p 127.0.0.1:9786:2368 ghost:1.21.4

This runs the docker container in the background. I am exposing the container’s port 2368 on port 9786 of the localhost (using ghost as an example in this case).
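To confirm the container is up and the port mapping works (container name and port as assumed above), a quick check could look like:

```shell
docker ps --filter name=xyz     # the container should be listed as "Up"
curl -I http://127.0.0.1:9786/  # ghost should answer on the mapped port
```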


sudo vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

Now we have to set up nginx for the server name xyz.anweshadas.in, in a configuration file named xyz.anweshadas.in.conf. The configuration looks like this


server {
        listen 80;

        server_name xyz.anweshadas.in;

        location / {
                # proxy commands go here as in your port 80 configuration

                proxy_pass http://127.0.0.1:9786/;
                proxy_redirect off;
                proxy_set_header HOST $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header X-Real-IP $remote_addr;
                    }
}

In the above-mentioned configuration we are receiving HTTP requests on port 80 and forwarding all requests for xyz.anweshadas.in to port 9786 of our localhost.

Before we can start nginx, we have to set an SELinux boolean so that the nginx server can connect to any port on localhost (-P makes the change persist across reboots):

setsebool -P httpd_can_network_connect 1

systemctl start nginx

Now you will be able to see the ghost running at http://xyz.anweshadas.in.

To protect one’s security and privacy on the web, it is very important to know that the people or services one is communicating with are actually who they claim to be.
In such circumstances, TLS certificates are what we rely on. Let’s Encrypt is one such certificate authority that provides them.

It provides certificates for Transport Layer Security (TLS) encryption via an automated process. Certbot is the client side tool (from the EFF) to get a certificate from Let’s Encrypt.

So we will get an https (secure) certificate for our server by installing certbot.
Let’s get started:

yum install certbot
mkdir -p /var/www/xyz.anweshadas.in/.well-known

We make a directory named .well-known in /var/www/xyz.anweshadas.in, where Let’s Encrypt will place the files it uses to validate our domain.

chcon -R -t httpd_sys_content_t /var/www/xyz.anweshadas.in

This sets the SELinux context of the directory /var/www/xyz.anweshadas.in so that nginx is allowed to serve its contents.

Now we need to enable access to the .well-known directory under our domain, so that Let’s Encrypt can verify it. The nginx configuration is as follows:

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
                alias /var/www/xyz.anweshadas.in/.well-known;
        }

        location / {
                  # proxy commands go here as in your port 80 configuration

                  proxy_pass http://127.0.0.1:9786/;
                  proxy_redirect off;
                  proxy_set_header HOST $http_host;
                  proxy_set_header X-NginX-Proxy true;
                  proxy_set_header X-Real-IP $remote_addr;
         }

}
certbot certonly --dry-run --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

We are performing a test run of the client by obtaining test certificates through placing files in a webroot, without actually saving them to the hard drive. A dry run is important because the number of times one can get certificates for a particular domain is limited (20 times in a week). All the subdomains under a particular domain are counted separately. To know more, go to the manual page of Certbot.

certbot certonly --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

After the dry run succeeds, we rerun the command without --dry-run to get the actual certificates. In the command, we provide the webroot using -w, pointing to the /var/www/xyz.anweshadas.in/ directory, for the particular domain (-d) named xyz.anweshadas.in.
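Let’s Encrypt certificates expire after 90 days, so it is worth automating renewal. A minimal cron entry could look like the following (a sketch; adjust the schedule and paths to your setup):

```
# m h dom mon dow  command   (runs every Monday at 03:00)
0 3 * * 1  certbot renew --quiet --post-hook "systemctl reload nginx"
```

certbot renew only re-issues certificates that are close to expiry, and the post-hook reloads nginx so it picks up the new files.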

Let us add some more configuration to nginx, so that we can access the https version of our website.

vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

The configuration looks like:

server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    # add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

To view https://xyz.anweshadas.in, reload nginx.

systemctl reload nginx

In case of any error, go to the nginx logs.
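Before reloading, the configuration can also be validated, and the logs inspected, with commands along these lines (log paths may differ on your system):

```
nginx -t                          # validate the configuration syntax
journalctl -u nginx --no-pager    # service-level errors
tail /var/log/nginx/error.log     # request-level errors
```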

If everything works fine, then use the configuration below.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
        }

        rewrite ^ https://$host$request_uri? ;

}
server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;


    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    #add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
    # proxy commands go here as in your port 80 configuration

    proxy_pass http://127.0.0.1:9786/;
    proxy_redirect off;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
    }
}

The final nginx configuration [i.e., /etc/nginx/conf.d/xyz.anweshadas.in.conf] looks like the following, with the rewrite rule forwarding all http requests to https, and with the “Strict-Transport-Security” header uncommented.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
         }

        rewrite ^ https://$host$request_uri? ;

}

server {
        listen 443 ssl;

        # if you wish, you can use the below line for listen instead
        # which enables HTTP/2
        # requires nginx version >= 1.9.5
        # listen 443 ssl http2;

        server_name xyz.anweshadas.in;

        ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

        # Turn on OCSP stapling as recommended at
        # https://community.letsencrypt.org/t/integration-guide/13123
        # requires nginx version >= 1.3.7
        ssl_stapling on;
        ssl_stapling_verify on;

        # modern configuration. tweak to your needs.
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;


        # Uncomment this line only after testing in browsers,
        # as it commits you to continuing to serve your site over HTTPS
        # in future
        add_header Strict-Transport-Security "max-age=31536000";


        # maintain the .well-known directory alias for renewals
        location /.well-known {

            alias /var/www/xyz.anweshadas.in/.well-known;
    }

        location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        }
}

So now, hopefully, the website shows the desired content at the correct URL.
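As an aside, the nginx documentation recommends return over rewrite for simple whole-site redirects. An equivalent sketch of the port-80 block, keeping the .well-known alias reachable over plain http for renewals, would be:

```nginx
server {
    listen 80;
    server_name xyz.anweshadas.in;

    location /.well-known {
        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```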

For this particular work, I am highly indebted to the Linux for You and Me book, which introduced me to, and made me comfortable with, the Linux command line.

by Anwesha Das at May 25, 2018 05:04 PM