Planet dgplug

January 22, 2018

Shakthi Kannan

Emacs Meetup (virtual), January 15, 2018

I wanted to start the New Year (2018) by organizing an Emacs meetup session in the APAC time zone. Since there are a number of users in different cities, I thought a virtual session would be ideal. An online Google Hangout session was scheduled for Monday, January 15, 2018 at 1000 IST.

Hangout announcement

Although I announced it only on Twitter and on IRC (#emacs), a number of Emacsers joined the session. The chat log, with the useful web links that were shared, is provided below for reference.

We started our discussion on organizing Org files and maintaining TODO lists.

9:45 AM Suraj Ghimire: it is clear and loud :) video quality is good too 
                       yes. is there any session today on hangouts ?
                       wow thats nice thanks you for introducing me to emacs :). I am happy emacs user
                       should we add bhavin and vharsh for more testing
                       oh you already shared :)
9:55 AM Suraj Ghimire:  working on some of my todos

For the few Vim users who wanted to try Emacs Org mode, it was suggested that they get started with Spacemacs. Other project management and IRC tools for Emacs were also shared:

  HARSH VARDHAN can now join this call.
  HARSH VARDHAN joined group chat.
  Google Apps can now join this call.
  Google Apps joined group chat.
10:05 AM Shakthi Kannan:
                         ERC for IRC chat

10:13 AM Shakthi Kannan:
10:18 AM Suraj Ghimire: I started using emacs after your session on emacs, before that i used to 
                        get scared due to lot of shortcuts. I will work on improvements you told me.
10:19 AM Shakthi Kannan: M - Alt, C - Control

  Google Apps left group chat.
  Google Apps joined group chat.
  Sacha Chua can now join this call.
  Sacha Chua joined group chat.

We then discussed key bindings, available modes, and reading material for learning and mastering Emacs:

10:27 AM Shakthi Kannan:
10:31 AM Shakthi Kannan:

  Dhavan Vaidya can now join this call.
  Dhavan Vaidya joined group chat.
  Sacha Chua left group chat.
10:42 AM Shakthi Kannan:

  Rajesh Deo can now join this call.
  Rajesh Deo joined group chat.

Users also wanted to know of language modes for Erlang:

10:52 AM Shakthi Kannan:

  HARSH VARDHAN left group chat.
  Aaron Hall can now join this call.
10:54 AM Shakthi Kannan:

Aaron Hall joined the channel and had a few interesting questions. After an hour, we ended the call.

  Aaron Hall joined group chat.
10:54 AM Aaron Hall: hi!
10:54 AM Dhavan Vaidya: hi!
10:55 AM Aaron Hall: This is really cool!

Maikel Yugcha can now join this call.
Maikel Yugcha joined group chat.
10:57 AM Aaron Hall: Anyone here using Emacs as their window manager?
10:57 AM Suraj Ghimire: not yet :)
10:57 AM Shakthi Kannan: I am "mbuf" on IRC.
10:58 AM Aaron Hall: What about on servers? I just played around, but I like tmux for persistence and emacs inside of tmux.
10:59 AM Shakthi Kannan:
11:00 AM Aaron Hall: Is anyone compiling emacs from source?

Zsolt Botykai can now join this call.
Zsolt Botykai joined group chat.
11:00 AM Aaron Hall: yay, me too!

Zsolt Botykai left group chat.
11:00 AM Aaron Hall: it wasn't easy to start, the config options are hard
                     I had trouble especially with my fonts until I got my configure right

Maikel Yugcha left group chat.
11:03 AM Shakthi Kannan:
11:04 AM Aaron Hall anyone using Haskell? With orgmode? I've been having a lot of trouble with that...
                    code blocks are hard to get working
                    not really sure
                    it's been a while since I worked on it
                    I had a polyglot file I was working on, I got a lot of languages working
                    Python, Bash, R, Javascript,
                    I got C working too
11:06 AM Shakthi Kannan: Rajesh:
11:07 AM Aaron Hall: cheers, this was fun!

Aaron Hall left group chat.
Dhavan Vaidya left group chat.
Rajesh Deo left group chat.
Google Apps left group chat.
Suraj Ghimire left group chat.

A screenshot of the Google Hangout session is shown below:

Google Hangout screenshot

We can try a different online platform for the next meetup (Monday, February 19, 2018). I would like to have the meetup on the third Monday of every month. Special thanks to Sacha Chua for her valuable input in organizing the online meetup session.

January 22, 2018 05:00 PM

January 17, 2018

Mario Jason Braganza

Carpe Diem!

(This is a rambling, introspective post, with no particular point to it, other than a reminder to my self to do better.)


Kushal Das wrote a lovely piece on inclusivity and generosity of spirit.
What hit me, though (ergo this note to myself), was his thundering twist of a climax.

He goes through the post talking about how his life’s been one roller coaster of highs and lows and people pulling him down like crabs in a barrel, yet other mentors pushing him hard to do his best.

And then he ends with

You don’t have to bow down in front of anyone, you can do things you love in your life without asking for others’ permissions.

Like Steven Pressfield tells Jeff Goins

At what point can someone who writes call himself a writer?

When he turns pro in his head. You are a writer when you tell yourself you are. No one else’s opinion matters. Screw them. You are when you say you are.

I wish I had learnt this so much earlier in life.
In a strange fit of domain blindness I somehow translated “Carpe Diem!” as doing my best work, but for others!

I spent close to ten years of my life learning skills, getting better yet lacking the courage to do what I wanted to do. Maybe if I wasn’t so chicken or worked extra hard for myself, things might have turned out differently for me too, instead of me being here, all of thirty-nine, wondering where the years went.

But thanks to the wife and her courage, I was inspired too!

I realised that I could not wait for life to hand me opportunities on a platter.
I could not wait for all my problems to go away, before I could make a risk free change.
I have only one life to live, and I don’t want to see myself ten, twenty, fifty years down the road, once again ruing the choices I made and the chances I did not take.

And the other related thing / flaw / weakness that I got over last year was that I stopped waiting for people to give me permission.
I used to think that if people were older and more experienced, they would automatically be wiser, in all domains of life.

Now I know through bitter experience that that is simply not true.
I am smarter, much smarter than most folks in some areas and dumber in most others.
The same holds true for other folk!

So it’s all up to me, to build myself up, to learn more, put myself out there and make something of myself, trusting in myself and amor fati.[1]

Like Horace wrote over two thousand years ago …

dum loquimur, fugerit invida
aetas: carpe diem, quam minimum credula postero.

In the moment of our talking, envious time has ebb’d away.
Seize the present; trust tomorrow e’en as little as you may.

And to wrap it up even more succinctly, here’s Steve Jobs[2], driving the point home (transcript below)

So, the thing I would say is … When you grow up you tend to get told the world is the way it is and your … your life is just to live your life inside the world, try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money.

But life … That’s a very limited life. Life can be much broader once you discover one simple fact, and that is …

Everything around you that you call life, was made up by people that were no smarter than you.
And you can change it, you can influence it, you can build your own things that other people can use.

And the minute that you understand that you can poke life and actually something will, you know if you push in, something will pop out the other side, that you can change it, you can mold it. That’s maybe the most important thing. It’s to shake off this erroneous notion that life is there and you’re just gonna live in it, versus embrace it, change it, improve it, make your mark upon it.

I think that’s very important and however you learn that, once you learn it, you’ll want to change life and make it better, cause it’s kind of messed up, in a lot of ways.
Once you learn that, you’ll never be the same again.

  1. The fates will bring what they will. All I can do is accept it, love it. ↩︎

  2. part of my circle of the eminent dead ↩︎

by Mario Jason Braganza at January 17, 2018 01:00 PM

Kushal Das

How to configure Tor onion service on Fedora

You can set up a Tor onion service in a VM on your home desktop, or on a Raspberry Pi attached to your home network. You can serve any website or ssh service this way. For example, in India, most of the time when an engineering student has to demo a web application, she has to demo it on her laptop or on a college lab machine. If you set up your web application project as an onion service, you can actually make it available to all of your friends. You don’t need an external IP, a special kind of Internet connection, or a paid domain name. Of course, it may be slower than all the fancy websites you have, but you don’t have to spend any extra money for this.

In this post, I am going to talk about how you can set up your own service using a Fedora 26 VM. Similar steps can be followed on a Raspberry Pi or any other Linux distribution.

Install the required packages

I will be using Nginx as my web server. The first step is to get the required packages installed.

$ sudo dnf install nginx tor
Fedora 26 - x86_64 - Updates                     10 MB/s |  20 MB     00:01
google-chrome                                    17 kB/s | 3.7 kB     00:00
Qubes OS Repository for VM (updates)             98 kB/s |  48 kB     00:00
Last metadata expiration check: 0:00:00 ago on Wed Jan 17 08:30:23 2018.
Dependencies resolved.
 Package                Arch         Version                Repository     Size
 nginx                  x86_64       1:1.12.1-1.fc26        updates       535 k
 tor                    x86_64         updates       2.6 M
Installing dependencies:
 gperftools-libs        x86_64       2.6.1-5.fc26           updates       281 k
 nginx-filesystem       noarch       1:1.12.1-1.fc26        updates        20 k
 nginx-mimetypes        noarch       2.1.48-1.fc26          fedora         26 k
 torsocks               x86_64       2.1.0-4.fc26           fedora         64 k

Transaction Summary
Install  6 Packages

Total download size: 3.6 M
Installed size: 15 M
Is this ok [y/N]:

Configuring Nginx

After installing the packages, the next step is to set up the web server. For a quick example, we will just serve the default Nginx index page over this onion service. We have to change the web server port to a different one in the /etc/nginx/nginx.conf file. Please read the Nginx documentation to learn how to configure Nginx with your web application.

listen 8090 default_server;

Here we have the web server running on port 8090.
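For reference, this is roughly how such a server block could look inside the http section of /etc/nginx/nginx.conf. This is a minimal sketch, not the article's actual configuration; the root path and server_name shown are illustrative:

```nginx
server {
    # Serve the default index page on port 8090;
    # the Tor onion service's port 80 will be mapped here.
    listen       8090 default_server;
    server_name  _;
    root         /usr/share/nginx/html;

    location / {
        index index.html;
    }
}
```

After editing the file, `sudo nginx -t` checks the configuration syntax before you restart the service.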

Configuring Tor

Next, we will set up the Tor onion service. The configuration file is located at /etc/tor/torrc. We will add the following two lines.

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80

We are redirecting port 80 of the onion service to port 8090 on the same system.

Starting the services

Remember to open up port 80 in the firewall before starting the services. I am going to leave it as an exercise for the reader to find out how :)

Next, we will start the nginx and tor services. You can also watch the system logs to find out the status of Tor.

$ sudo systemctl start nginx
$ sudo systemctl start tor
$ sudo journalctl -f -u tor
-- Logs begin at Thu 2017-12-07 07:13:58 IST. --
Jan 17 08:33:43 tortest Tor[2734]: Bootstrapped 0%: Starting
Jan 17 08:33:43 tortest Tor[2734]: Signaled readiness to systemd
Jan 17 08:33:43 tortest systemd[1]: Started Anonymizing overlay network for TCP.
Jan 17 08:33:43 tortest Tor[2734]: Starting with guard context "default"
Jan 17 08:33:43 tortest Tor[2734]: Opening Control listener on /run/tor/control
Jan 17 08:33:43 tortest Tor[2734]: Bootstrapped 5%: Connecting to directory server
Jan 17 08:33:44 tortest Tor[2734]: Bootstrapped 10%: Finishing handshake with directory server
Jan 17 08:33:44 tortest Tor[2734]: Bootstrapped 15%: Establishing an encrypted directory connection
Jan 17 08:33:45 tortest Tor[2734]: Bootstrapped 20%: Asking for networkstatus consensus
Jan 17 08:33:45 tortest Tor[2734]: Bootstrapped 25%: Loading networkstatus consensus
Jan 17 08:33:55 tortest Tor[2734]: I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jan 17 08:33:55 tortest Tor[2734]: Bootstrapped 40%: Loading authority key certs
Jan 17 08:33:55 tortest Tor[2734]: Bootstrapped 45%: Asking for relay descriptors
Jan 17 08:33:55 tortest Tor[2734]: I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6009, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jan 17 08:33:56 tortest Tor[2734]: Bootstrapped 50%: Loading relay descriptors
Jan 17 08:33:57 tortest Tor[2734]: Bootstrapped 56%: Loading relay descriptors
Jan 17 08:33:59 tortest Tor[2734]: Bootstrapped 65%: Loading relay descriptors
Jan 17 08:34:06 tortest Tor[2734]: Bootstrapped 72%: Loading relay descriptors
Jan 17 08:34:06 tortest Tor[2734]: Bootstrapped 80%: Connecting to the Tor network
Jan 17 08:34:07 tortest Tor[2734]: Bootstrapped 85%: Finishing handshake with first hop
Jan 17 08:34:07 tortest Tor[2734]: Bootstrapped 90%: Establishing a Tor circuit
Jan 17 08:34:08 tortest Tor[2734]: Tor has successfully opened a circuit. Looks like client functionality is working.
Jan 17 08:34:08 tortest Tor[2734]: Bootstrapped 100%: Done

There will be a private key and a hostname file for the onion service in the /var/lib/tor/hidden_service/ directory. Open up Tor Browser and visit the onion address from the hostname file. You should see a page like the screenshot below.

Remember to back up the private key file if you want to keep using the same onion address for a long time.

What all things can we do with this onion service?

That actually depends on your imagination. Feel free to research what different services can be provided over Tor. You can start by writing a small Python Flask web application and creating an onion service for it. Share the address with your friends.
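As a stand-in for the Flask application suggested above, here is a minimal sketch using only the Python standard library; it serves a single page on localhost, the same role nginx plays in this tutorial. The handler class, page text, and port choice are my own illustrations, not from the article:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler
import threading
import urllib.request

class DemoHandler(BaseHTTPRequestHandler):
    """Answer every GET request with a small HTML page."""
    def do_GET(self):
        body = b"<h1>Hello from my onion service!</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port for this demo; in the tutorial's setup you
# would use port 8090 so that HiddenServicePort can point at it.
server = HTTPServer(("127.0.0.1", 0), DemoHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page locally, exactly as Tor would proxy requests to it.
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
print(page)
server.shutdown()
```

Once something like this listens on the port named in HiddenServicePort, the page becomes reachable at your onion address through Tor Browser.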

Ask your friends to use Tor Browser for daily web browsing. The more Tor traffic we can generate, the more difficult it becomes for nation-state actors to monitor traffic, and that in turn helps the whole community.

WARNING on security and anonymous service

Remember that this tutorial is for quick demo purposes only. It will not hide your web server details, IP address, or operating system details. You will have to follow proper operational security practices along with good system administration skills. Riseup has a page describing best practices. Please make sure that you do enough study and research before you start providing long-term services over Tor.

Also, please remember that Tor is developed and run by people all over the world, and the project needs donations. Every little bit of help counts.

by Kushal Das at January 17, 2018 05:53 AM

January 16, 2018

Kushal Das

Do not limit yourself

This post is all about my personal experience in life. I’ve talked about the random things I am going to write here in many 1x1 talks or chats. But, as many people asked for my views or suggestions on the related topics, I feel I can just write them all down in one single place. If you already get the feeling that this post will be a boring one, please feel free to skip it. There is no tl;dr version of it from me.

Why the title?

To explain the title of the post, I will go back a few years in my life. I grew up in a coal mine area of West Bengal, studied in the village’s Bengali medium school. During school days, I was very much interested in learning about Science, and kept doing random experiments in real life to learn things. They were fun. And I learned life lessons from those. Most of my friends, school teachers or folks I knew, kept telling me that those experiments were impossible, or they were beyond my reach. I was never a class topper, but once upon a time I wanted to participate in a science exam, but the school teacher in charge told me that I was not good enough for it. After I kept asking for hours, he finally said he will allow me, but I will have to get the fees within the next hour. Both of my parents were working, so no chance of getting any money from them at that moment. An uncle who used to run one of the local book stores then lent me the money so that I could pay the fees. The amount was very small, but the teacher knew that I didn’t get any pocket money. So, asking for even that much money within an hour was a difficult task. I didn’t get a high score in that examination, but I really enjoyed the process of going to a school far away and taking the exam (I generally don’t like taking written exams).

College days

During college days, I spent most of my time in front of my computer at the hostel, or in the college computer labs. People kept laughing at me for it: batchmates, juniors, seniors, sometimes even professors. But at the same time, I found a few seniors, friends, and professors who kept encouraging whatever I did. The number of people laughing at me was always higher. Because of my experience during school days, I managed to ignore them.

Coming to the recent years

The trend continued throughout my working life. There are always more people laughing at everything I do. They kept telling me that the things I try to do have no value and are beyond my limits. I don’t see myself as one of those bright developers I meet out in the world. I kept trying to do things I love, and tried to help the community in whichever way possible. Whatever I know, I learned because someone else took the time to teach me, took the time to explain it to me. Now, I keep hearing similar stories from many young contributors, my friends, from India. Many times I saw people laughing at my friends the same way they do at me, telling them that the things they are trying to achieve are beyond their limits. I somehow managed to meet many positive forces in my life, and I keep meeting new ones. This helped me understand that we generally bind ourselves within artificial limits. Most of the folks laughing at us never tried anything in life. It is okay if we cannot write or speak perfect English like them; English is not our primary language anyway. We can communicate as required. The community out there welcomes everyone as they are. We don’t have to invent the next best programming language, or be the super rich startup person, to have good friends in life. One can always push at a personal level to learn new things, to do things which make sense to each of us. That may look totally crazy from other people’s point of view. But it is okay to try things as you like. Once upon a time, during a 1x1 with my then manager (and lifelong mentor) Sankarshan Mukhopadhyay, he told me something which has stayed with me very strongly to this day. We were talking about things I can do, or rather try to do. Taking the example of one of my good friends from Red Hat, he explained that I may think my level is nowhere near this friend’s. But if I try to learn and do things like him, I may reach 70% of his level, or 5%, or 50%. Who knows, unless I try doing those new things? While talking about hiring for the team, he also told me that we should always try to get people who are better than us; that way, we will always be in a position to learn from each other. I guess those words together changed many things in my life. The world is too large, and we all can do things in our lives at a certain level. But what we can do depends on where we draw those non-existent limits in our lives.

The Python community is one such example. When I went to PyCon US for the first time in 2013, the community welcomed me the way I am. Even though almost no one knew me, I never felt that while meeting and talking to my lifetime heroes. Funny that at the same conference, a certain senior person from India tried to explain that I should start behaving like a senior software engineer: I should stand in the corner with all the world’s ego and not talk to everyone the way I do. Later in life, the same person tried to convince me that I should stop doing anything related to community, as that would not help me make any money.

Sorry, but they are wrong on that point. I never saw any of my favorite human beings behave that way. No matter how senior people are, age or experience wise, they always listen to others and talk nicely with everyone. Money is not everything in life. I kept jumping around at PyCon every year, kept clicking photos or talking with complete strangers about their favorite subjects. Those little conversations later became much stronger bonds; I made new friends whom I generally meet only once a year. But the community is still welcoming. No one cared to judge me based on how much money I make. We try to follow the same in dgplug. The IRC channel #dgplug on Freenode is always filled with folks from all across the world. Some are very experienced contributors, some are just starting. But it is a friendly place, and we try to help each other. The motto of Learn yourself, teach others is still very strong among us. We try to break any such stupid limits others try to force on our lives. We dream, we enjoy talking about that book someone just finished, we discuss our favorite food. I will end this post by saying one thing again. Do not bind yourself within non-existent limits. Always remember, What a great teacher, failure is (I hope I quoted Master Yoda properly). Not everything we try in life will be a super successful thing, but we can always try to learn from those incidents. You don’t have to bow down in front of anyone, you can do things you love in your life without asking for others’ permissions.

by Kushal Das at January 16, 2018 04:17 AM

January 15, 2018

Mario Jason Braganza

An Idea to Get Me Writing Regularly


I struggle to write regularly.

Sometimes, I struggle to write, because I can’t think of anything to write.
And sometimes, I struggle, because I have a deluge of ideas.

So I want to get this wisp of an idea down on the page, before I lose it.

  1. If I think it is of any interest to me, I should write it down in my own words.
  2. I will not pick up another book, before I write what I think of it, or write down the notes I highlighted.
    Even if it’s only a line, I ought to write it down, instead of just using LibraryThing or Goodreads to say I’ve read it.
  3. If I learn it, I should write it.
    I’ve already forgotten how I got a carousel installed on my personal blog.
    I should write shit down.
  4. Drastically reduce Twitter and RSS use.
  5. Copy Kushal and have a regular weekly cadence.
    That’ll give me at least fifty-two posts a year.

Well, that’s the idea … written down.

All I have to do now, is do it.

by Mario Jason Braganza at January 15, 2018 01:29 AM

January 04, 2018

Saptak Sengupta

CSS Masking: Mask of Batman

Recently, I have been watching videos on CSS to learn more about its various interesting features, such as grid layout, flex, filter and so on. After all, it is one of the Turing-complete languages. One way I like to learn more about languages is by watching conference talks. So, I was going through the different talks at CSSConf 2015 when I came across a talk by Tim Holman titled Fun.css. It is a really fun talk, but one thing that really caught my eye was the transition between slides using a star wipe. I searched to see if I could find the code for how to do it with pure CSS but didn't get much help. That is when I decided to give it a try myself. But why a star? I would make a bat swipe.

So this is the final prototype I made using just CSS without any Javascript.

See the Pen Bat Swipe by Saptak Sengupta (@SaptakS) on CodePen.

The main CSS property used in the above example is mask-image, along with the associated CSS masking properties. What masking does is hide parts of the visual section of the webpage behind a mask layer. There are two ways of masking: CSS or SVG. You can either use a CSS gradient or luminance to create a mask, or use an SVG image. In our example, we are going to talk about how to use SVG.

<div class="content">
  <div class="wipe">
    <div class="content black">
      <p>I am Batman!</p>
    </div>
  </div>
</div>

So, in the HTML code we basically have two layers. To create them, I put a <div class="content"> with the actual content inside a <div class="wipe">. The text in the inner "content" div must remain hidden until the wipe class reveals it.

Now it's time to implement the CSS code to make this thing happen.
.wipe {
  mask-image: url(batman-logo.svg); /* placeholder path; point this at your SVG mask */
  mask-repeat: no-repeat;
  mask-position: 50% 50%;
  mask-size: 0;
}

The above CSS code applies the mask over the content class so that only a part of the content is visible. Here, mask-image is a link to the SVG image we want to use as our mask. Since it's a bat swipe, I have used the logo of Batman. You can use pretty much any SVG image, but I would suggest a filled SVG image rather than line art. This is because SVG masking uses the alpha channel of the image. In our case, since the Batman logo is a filled black image on top of a white background, when we apply it as a mask, the visual area behind the black part of the image is visible, and the area behind the white part gets masked, or hidden.

The CSS properties mask-repeat and mask-position ensure that there is only one mask, whatever its size, and that it always stays in the middle. You can modify these parameters based on your use case. Say you wanted a polka dot mask style; you would then want a black disk as the mask-image and mask-repeat set to "repeat".

The CSS property mask-size is the most important one in our case for creating the effect of a swipe or reveal. By now, most of you must have already guessed that to get the effect, what we essentially need to do is increase the size of the mask image over time. I have done the entire swipe animation on hover because it is the easiest way to display a transition with only CSS.

So at the beginning, we keep mask-size at 0, and then we add a :hover rule that increases mask-size to 500% (enough to cover the entire page at most resolutions) with a transition.

.wipe:hover {
  mask-size: 500%;
  transition: 3s;
}

We apply a transition over 3s using the CSS transition property. So, if you would like a really slow animation instead of a swiping action, you can just increase the value of the transition property so that the change of size happens over a longer duration.

The only thing remaining is cross-browser compatibility. Will this code work in all browsers? Sadly, no. As of now, mask-image and the associated CSS properties are fully supported only in Firefox, since version 53. However, you can make it work in Google Chrome, Opera and Safari using the "-webkit-" prefix. So if you add the -webkit-mask-image property as well, it starts working in Chrome and Safari too. Sadly, IE, Edge and Opera Mini don't provide any kind of support.
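In practice that means duplicating each masking property with the prefix. A sketch of the combined rules (the SVG path here is a placeholder, not the original URL):

```css
.wipe {
  /* Prefixed copies for Chrome, Safari and Opera */
  -webkit-mask-image: url(batman-logo.svg);
  -webkit-mask-repeat: no-repeat;
  -webkit-mask-position: 50% 50%;
  -webkit-mask-size: 0;
  /* Standard properties, supported in Firefox 53+ */
  mask-image: url(batman-logo.svg);
  mask-repeat: no-repeat;
  mask-position: 50% 50%;
  mask-size: 0;
}

.wipe:hover {
  -webkit-mask-size: 500%;
  mask-size: 500%;
  transition: 3s;
}
```

Listing the prefixed properties before the standard ones lets browsers that support both prefer the unprefixed form.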

by SaptakS at January 04, 2018 09:59 AM

December 31, 2017

Sanyam Khurana

9 essential questions I asked myself at the end of year 2017

I read an article on Medium about 9 Essential Questions Everyone Should Ask Themselves At The End of The Year. It motivated me to ask myself these questions.

Yes, indeed, the end of December put me in a reflective mood to decipher how the year 2017 was for me. My learnings from these 12 months will definitely help point me in a direction to improve myself.

If you had to describe these previous 12 months, in a sentence, what would that sentence be?

No matter where you stand today; perseverance is something that can get you anything.

Ask yourself — which one event, big or small, is something that you will still talk about in 5 years?

My contributions to CPython were recognized; I got the Developer role on the CPython bug tracker. The community trusted me with such privileges, and that is something really special :)

What successes, accomplishments, wins, great news and compliments happened this year? How did you feel? What single achievement are you most proud of? Moreover, why?

It was contributing (small) changes to improve the Python programming language. The best news I got was indeed getting promoted to bug triager for CPython. It feels absolutely mesmerizing. A huge part of the folks who code in Python are already using features that I developed.

It feels so encouraging when someone reports a bug on my feature. Even though there was a flaw in its working, what gives me happiness is that they were using it.

It is the feeling of helping people all over the world.

Besides that, I am helping a few students to learn to program and contribute to Open Source. That makes me immensely happy.

And last but not the least, I learned to play a few songs on Guitar :)

Did something prevent you, or you used the “Excuse Card” too much?

Yes, indeed. Although it is important to be persistent and work with perseverance, it is also important to know what should be done and where to really apply our efforts.

I remember this corresponds to what I learned in Physics about direction cosines for force. You need to apply force, but while ensuring that the angle is small; that is what gives your persistent force the maximum impact.

So, while applying effort is important, it is much more important to apply it in the right direction. Not applying the force in the right direction made it real hard to move on stuff.
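The physics metaphor can be made concrete with a quick calculation: the useful component of a force F applied at an angle θ to the intended direction is F·cos θ, so the smaller the angle, the more of the effort counts. The numbers below are purely illustrative:

```python
import math

effort = 10.0  # total "force" you put in, arbitrary units

# Effective component along the intended direction: F * cos(theta)
for degrees in (0, 30, 60, 90):
    effective = effort * math.cos(math.radians(degrees))
    print(f"angle {degrees:>2} deg -> effective effort {effective:.2f}")
```

At 0 degrees all ten units count; at 90 degrees, perpendicular to the goal, effectively none of them do.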

Which of your personal virtues or qualities turned out to be the most helpful this year?

I learned that it is important to prioritize things. It is important to Get things done. We just need to do small tasks in a simple manner; the big things will take care of themselves.

Being persistent with stuff was most helpful to me in this year.

Who was your number one go-to person that you could always rely on?

There are a lot of people that helped me in various things. The list is really long and might bloat up this blog post. So, I'll keep it off for another day.

But I'm grateful to a lot of folks, who helped me in different phases of my life.

What Was The Most Common Mental State This Year?

It was indeed a roller-coaster ride. It was full of ups and downs; sometimes several times a month. With many important learnings, life happened.

Sylvester Stallone, in his role of Rocky Balboa, had an amazing speech:

“Let me tell you something you already know. The world ain’t all sunshine and rainbows. It is a very mean and nasty place, and I do not care how tough you are it will beat you to your knees and keep you there permanently if you let it. You, me, or nobody is gonna hit as hard as life. However, it ain’t about how hard ya hit. It is about how hard you can get hit and keep moving forward. How much you can take and keep moving forward. That is how winning is done!”

I felt euphoric just helping others. May it be helping students learn and contribute to FOSS, or just helping folks with my work in Open Source. All of it triggers curiosity, excitement, and enthusiasm.

What’s The Difference Between You on January 1st of 2017 vs You Right Now?

  • If you were to write a short biography about yourself right now, what would you say?
  • How would you describe yourself? What's the best thing about you?
  • How about one year ago? What are the main differences?

I learned to appreciate the things I have right now. It is the best I could have got. There is much more to accomplish, and I'll keep working for it.

I completely respect and appreciate the things I've faced. They made me stronger, wiser & more confident.

I appreciate every single thing. Every single person that I know :)

What Are You Grateful For?

I am grateful to someone who made me realize that I wasn't good enough. That lit a fire which fueled enough motivation to keep me going through the years. Year after year, it never ends, just continues...

Alright, this marks the end of the post. Overall, the year was full of excitement, learnings, new friends, a few vacations and what not :)

by Sanyam Khurana at December 31, 2017 07:25 AM

December 28, 2017

Shakthi Kannan

Ansible deployment of Graphite

[Published in Open Source For You (OSFY) magazine, July 2017 edition.]


In this fifth article in the DevOps series we will learn to install and set up Graphite using Ansible. Graphite is a monitoring tool that was written by Chris Davis in 2006. It has been released under the Apache 2.0 license and comprises three components:

  1. Graphite-Web
  2. Carbon
  3. Whisper

Graphite-Web is a Django application and provides a dashboard for monitoring. Carbon is a server that listens to time-series data, while Whisper is a database library for storing the data.
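Carbon's plaintext protocol is simple enough to sketch in a few lines of Python. The snippet below is an illustration only, not part of the article's playbooks; the host, port 2003 (Carbon's default plaintext port) and the metric name are assumptions for the example:

```python
import socket
import time

def carbon_line(path, value, timestamp=None):
    """Format one metric in Carbon's plaintext protocol: '<path> <value> <timestamp>\\n'."""
    ts = int(timestamp if timestamp is not None else time.time())
    return "%s %s %d\n" % (path, value, ts)

def send_metric(path, value, host="localhost", port=2003):
    """Send a single metric line to a Carbon cache listening on the plaintext port."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(carbon_line(path, value).encode("ascii"))

if __name__ == "__main__":
    # Formatting only, no network needed; prints: servers.web01.load 1.5 1514764800
    print(carbon_line("servers.web01.load", 1.5, 1514764800), end="")
```

Whisper then stores each received value in its fixed-size database files, and Graphite-Web reads them back for the dashboard graphs.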

Setting it up

A CentOS 6.8 Virtual Machine (VM) running on KVM is used for the installation. Please make sure that the VM has access to the Internet. Ansible is installed on the host (Parabola GNU/Linux-libre x86_64). The ansible/ folder contains the playbook and the inventory file.


The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

graphite ansible_host= ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the graphite host in the /etc/hosts file as indicated below:

graphite


The playbook to install the Graphite server is given below:

- name: Install Graphite software
  hosts: graphite
  gather_facts: true
  tags: [graphite]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install Graphite server
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - graphite-web
We first import the keys for the Extra Packages for Enterprise Linux (EPEL) repository and update the software package list. The ‘graphite-web’ package is then installed using Yum. The above playbook can be invoked using the following command:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "graphite"


A backend database is required by Graphite. By default, the SQLite3 database is used, but we will install and use MySQL as shown below:

- name: Install MySQL
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [database]

  tasks:
    - name: Install database
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - mysql
        - mysql-server
        - MySQL-python
        - libselinux-python

    - name: Start mysqld server
      service:
        name: mysqld
        state: started

    - wait_for:
        port: 3306

    - name: Create graphite database user
      mysql_user:
        name: graphite
        password: graphite123
        priv: '*.*:ALL,GRANT'
        state: present

    - name: Create a database
      mysql_db:
        name: graphite
        state: present

    - name: Update database configuration
      blockinfile:
        path: /etc/graphite-web/
        block: |
          DATABASES = {
            'default': {
              'NAME': 'graphite',
              'ENGINE': 'django.db.backends.mysql',
              'USER': 'graphite',
              'PASSWORD': 'graphite123',
            }
          }

    - name: syncdb
      shell: /usr/lib/python2.6/site-packages/graphite/ syncdb --noinput

    - name: Allow port 80
      shell: iptables -I INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Update graphite-web.conf
      lineinfile:
        path: /etc/httpd/conf.d/graphite-web.conf
        insertafter: '           # Apache 2.2'
        line: '           Allow from all'

    - name: Start httpd server
      service:
        name: httpd
        state: started

As a first step, let’s install the required MySQL dependency packages and the server itself. We then start the server and wait for it to listen on port 3306. A graphite user and database are created for use with the Graphite Web application. For this example, the password is provided as plain text. In production, use an encrypted Ansible Vault password.

The database configuration file is then updated to use the MySQL credentials. Since Graphite is a Django application, its management script needs to be executed with syncdb to create the necessary tables. We then allow port 80 through the firewall in order to view the Graphite dashboard. The graphite-web.conf file is updated to allow read access, and the Apache web server is started.

The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "database"

Carbon and Whisper

The Carbon and Whisper Python bindings need to be installed before starting the carbon-cache script.

- name: Install Carbon and Whisper
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [carbon]

  tasks:
    - name: Install carbon and whisper
      yum:
        name: "{{ item }}"
        state: latest
      with_items:
        - python-carbon
        - python-whisper

    - name: Start carbon-cache
      shell: /etc/init.d/carbon-cache start

The above playbook is invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "carbon"


You can now open the Graphite web application in a browser on the host to view the dashboard. A screenshot is shown below:

Graphite dashboard


An uninstall script to remove the Graphite server and its dependency packages is required for administration. The Ansible playbook for the same is available in playbooks/admin folder and is given below:

- name: Uninstall Graphite and dependencies
  hosts: graphite
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the carbon-cache server
      shell: /etc/init.d/carbon-cache stop

    - name: Uninstall carbon and whisper
      yum:
        name: "{{ item }}"
        state: absent
      with_items:
        - python-whisper
        - python-carbon

    - name: Stop httpd server
      service:
        name: httpd
        state: stopped

    - name: Stop mysqld server
      service:
        name: mysqld
        state: stopped

    - name: Uninstall database packages
      yum:
        name: "{{ item }}"
        state: absent
      with_items:
        - libselinux-python
        - MySQL-python
        - mysql-server
        - mysql
        - graphite-web

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-graphite.yml


  1. Graphite documentation.

  2. Carbon.

  3. Whisper database.

December 28, 2017 04:00 PM

December 20, 2017

Sanyam Khurana

Promoted to bug-triager for CPython

This is huge! I couldn't believe when I just woke up next to this mail:

Sanyam Khurana has been promoted

Yes, I got promoted to the Developer role on the CPython bug tracker, which, along with other privileges, provides access to close bugs. But with great power comes great responsibility. Closing a bug means that the information is lost forever, so utmost care must be taken, and the reason for closing must be reported (which might even involve writing code/scripts that prove it :)).

Victor Stinner is mentoring me and a few other folks who have been promoted, to teach us more about contributing to the CPython code base. I've been reading the dev-guide and understanding the entire process to be followed.

Recently, we've also been practicing reviewing Pull Requests, along with reporting bugs and contributing code.

I hope to learn more about the process and contribute more to CPython. I wanted to write this post specially to thank Kushal Das and Nick Coghlan, who helped me get started with CPython in Feb 2017 during the PyCon Pune sprints.

Also, thanks to Victor Stinner and Ezio Melotti for providing me those privileges.

I hope to get a better understanding of the code base and contribute more ;)

by Sanyam Khurana at December 20, 2017 03:04 PM

December 05, 2017

Jaysinh Shukla

Book review ‘Docker Up & Running’

book image docker up and running

In the modern era of software engineering, old terms are often coined afresh with a new wrapper; such wrappers are required to make bread and butter out of them, and sometimes well-marketed terms get adopted as best practices. I had a lot of confusion about this Docker technology; I was not even familiar with the concept of containers. My goal was to get a high-level overview first and then come to a conclusion. I started reading about Docker from its official getting-started guide. It helped me to host this blog using Docker, but I was expecting a more in-depth overview. For that reason, I decided to look for better resources. After reading some Quora posts and Goodreads reviews, I decided to read “Docker Up & Running” by K. Matthias and S. Kane. I am sharing my reading experience here.


The book provides a nice overview of the Docker toolchain. It is not a reference book. Even though a few options are deprecated, I will advise you to read this book and then refer to the official documentation to get familiar with the latest developments.

Detailed overview

I got a printed copy at nearly 450 INR (roughly 7 USD, where 1 USD = 65 INR) from Amazon. The price is fairly acceptable with respect to the print quality. The book begins with a little history of containers (Docker is an implementation of the container concept). Initial chapters give a high-level overview of the Docker tools, covering the Docker engine, Docker image, Docker registry, Docker Compose and the Docker container. The authors have pointed out situations where Docker is not suitable; I insist you do not skip that topic. I skipped the dedicated chapter on installing Docker. I will advise you to skip irrelevant topics because the chapters are not interlinked. You should read chapter 5, which discusses the behavior of the container. That chapter cleared many of my confusions. Somehow I got lost in between, but re-reading helped. These chapters are enough to get a general idea about Docker containers and images.

The next chapters focus more on best practices to set up the Docker engine. Frankly, I was not aware of possible ways to debug, log or monitor containers at runtime. This book points out a few expected production glitches that you should keep in mind. I didn’t like the testing workflow depicted by the authors, and will look for other references which highlight more strategies to construct your test workflow. If you are aware of any, please share them with me via e-mail. I knew about achieving auto-scaling using various orchestration tools, and this book provides step-by-step guidance on configuring and using them. The tools covered are Docker Swarm, Centurion and Amazon EC2 Container Service; unfortunately, the book is missing Kubernetes and Helios here. As a part of the advanced topics, you will find a comparison of various filesystems with a shallow overview of how the Docker engine interacts with them. The same chapter discusses the available execution drivers and introduces LXC as another container technology.
This API option was deprecated by Docker version 1.8, which makes libcontainer the only dependency. I learned how Docker containers provide the virtualization layer using Namespaces, and how Docker limits the execution of a container using CGroups (Control Groups). Namespaces and CGroups are GNU/Linux-level dependencies used by Docker under the hood. If you are an API developer, then you should not skip Chapter 11. This chapter discusses two well-followed patterns, the Twelve-Factor App and The Reactive Manifesto. These guidelines are helpful while designing the architecture of your services. The book concludes with further challenges of using Docker as a container tool.

I found one typo, on page 123, in the second-to-last line.

expore some of the tools... 

Here, expore is a typo and it should be

explore some of the tools... 

I have submitted it to the official errata. At the time of writing this post, it has not been confirmed by the authors. I hope they will confirm it soon.

Who should read this book?

  • Developers who want to get an in-depth overview of the Docker technology.

  • If you set up deployment clusters using Docker, then this book will help you to get an overview of Docker engine internals. You will find security and performance guidelines.

  • This is not a reference book. If you are well familiar with Docker, then this book will not be useful. In that case, the Docker documentation is the best reference.

  • I assume Docker did not support the Windows platform natively when the book was written. The book focuses on the GNU/Linux platform, and highlights ways to run Docker on Windows using VMs and Boot2Docker for non-Linux, VM-based servers.

What to keep in mind?

  • Docker is changing rapidly. There will be situations where mentioned options are deprecated. In such situation, you have to browse the latest Docker documentation and try to follow them.

  • You will be able to understand the official documentation better after reading this book.


  • Your GNU/Linux skills are your Docker skills. Once you understand what Docker is, your decisions will become more mature.
Proofreaders: Dhavan Vaidya, Polprog

Printed Copy

by Jaysinh Shukla at December 05, 2017 05:56 AM

November 30, 2017

Saptak Sengupta

Science Hack Day India, 2017

So, I finally managed to clear up some time to write about the best event of the year I attended: Science Hack Day India, 2017. This was my second time at Science Hack Day India. SHD 2016 was so phenomenal, there was no way I was missing it this time either. Phenomenal more because of the wonderful people I got to meet and really connect with, since the entire atmosphere of the event is like an informal, friendly unconference. This year was no different.

                Picture Credit: Sayan Chowdhury

Science Hack Day 2017 was truly bigger, better and even more fun than last year. Happening at one of the most happening venues, Sankalp Bhumi Farm, just the stay is so lovely that one doesn't need much other reason to attend. Unlike last time, this year I had two friends accompanying me to Science Hack Day. We reached early in the morning on day 0. Like at all conferences, it was really good to meet everyone, whom I was personally meeting maybe after 6 months, or a year, or maybe for the very first time. There were general discussions about who is working on what, the new terminal emulator they are using, the nginx trick they might be using, or the great new open source software they came across. But this is something everyone knows happens when techies meet. What most people don't know about are things like the cycling and kayaking that we do. So most of the afternoon was spent by everyone cycling, kayaking and having fun rather than in any serious discussion at all. In the evening there was an informal introduction by everyone, to get a little accustomed. After dinner, everyone bid goodnight and went to sleep.

But have you ever heard of geeks sleeping right after dinner? Obviously not. So it was only a matter of time before everyone re-grouped at the hackerspace which was set up for the next day. Then Farhaan and I had the privilege of listening to stories from a dreamy Sayan Chowdhury, which marked the end of the day for us.

Next morning, after breakfast, it was time for the mentor introductions, followed by a great basic explanation of how an aeroplane flies. It reminded me of my science classes, and I started wishing that we had had similar explanations with a proper unmanned aircraft back then. And it wasn't just theory upon theory; we got to see that aircraft actually fly. This marked the actual notion of a hack day: we don't just talk, we make and also break stuff. After this it was time to start with our hacks. Contrary to my earlier plans, my friends and I started working on assembling a 3D printer which was mainly brought for Hackerspace Belgaum. I had always wondered what the big deal about assembling was, but I realised I was so wrong.

The entire assembly took all day, since we were doing it for the first time and figuring out stuff as we went. I mostly attached parts while my smarter friends figured everything out and told me what to attach where. By dinner it was ready and assembled. And I was like "Yay! Let's start printing". That is when Siddhesh told me that the trickiest part was yet to be done: calibration. So we got started with it. When calibration was all set and done, it was time to print. We decided to print the "Hello World" of 3D printing, i.e. a cube. So the cube started printing: the first layer got printed, the second layer got printed, and by the third layer, everything came off. We realised the bed wasn't heating.

A little disappointed, we settled for the day and went off to bed. The next day we decided to use glue to make the bed somewhat sticky. This time it printed. Not so perfectly, but mostly all good. I have never been more excited to see a tiny little white cube, and neither have I seen so many other people behave the same. After that it was time for rocket flying, followed by a group photo. The event ended with a project presentation by every team.

Hoping to come back again next year.

by SaptakS at November 30, 2017 03:21 PM

November 14, 2017

Jaysinh Shukla

My experience of mentoring at Django Girls Bangalore 2017



Last Sunday, Django Girls Bangalore organized a hands-on session of web programming. This is a small event report from my side.

Detailed overview

Django Girls is a not-for-profit initiative led by Ola Sitarska and Ola Sendecka. The movement helps women learn the skills of website development using the well-known web framework Django. This community is backed by organizers from many countries. Organizations like The Python Software Foundation, GitHub, DjangoProject and many more fund Django Girls.

The Django Girls Bangalore chapter was organized by Sourav Singh and Kumar Anirudha. This was my second time mentoring at a Django Girls event; the first was for the Ahmedabad chapter. The venue was sponsored by HackerEarth. Eight male and two female mentors guided 21 women during this event, with each mentor assigned roughly 3 participants. Introducing participants to web development becomes easy with the help of the Django Girls handbook, a combination of beginner-friendly, hands-on tutorials described in simple language. The handbook covers everything from the basics of the Python programming language to deploying your web application.

The participants assigned to me came prepared, with Python pre-configured on their workstations. We started by introducing ourselves. We took some time browsing a website of undersea cables; one of the amusing questions I got was, “Isn’t the world connected with satellites?”. My team was comfortable with Python, so we quickly skimmed to the part where I introduced them to the basics of the web and then Django. I noticed mentors were progressing according to the convenience of the participants, and a nice amount of time was invested in discussing the queries raised. In the middle of this, we heard a loud call for lunch. A decent meal was served to all the members, and I networked with other mentors and participants during the break. Post-lunch we created a blog app and configured it with our existing project. Explaining an overview of Django models, topped with the concept of the ORM, turned out to be the most arduous task. With time as a constraint, I focused on the admin panel and taught the girls about deploying their websites to PythonAnywhere. I am happy with the hard work done by my team. They were able to demonstrate what they did to the world, and I was perhaps more joyful than they were at the achievement.

The closing ceremony turned out to be an amusing event for us. 10 copies of the book Two Scoops of Django were distributed to participants chosen by a random draw. I sincerely thank the authors of the book for gifting such a nice reference. Participants shared their experiences of the day. Mentors pinpointed helpful resources to follow up on, and insisted that the girls not stop at this point but spread their wings by developing websites using the skills they learned. T-shirts, stickers and badges were distributed as event swag.

You can find the list of all Django Girls chapters here. Djangonauts are encouraged to become mentors for Django Girls events in their towns. If you can't find any in your town, I encourage you to take the responsibility and organize one. If you are already a part of the Django Girls community, why not share your experience with others?

Proofreaders: Kushal Das, Dhavan Vaidya, Isaul Vargas

by Jaysinh Shukla at November 14, 2017 02:31 AM

October 22, 2017

Samikshan Bairagya

Notes on Tensorflow and how it was used in ADTLib

It's been almost 2 years since I became an amateur drummer (which, apparently, is also the time since my last blog), and I have always felt that it would be great to have something that can provide me with drum transcriptions from a given music source. I researched a bit and came across a library that provides an executable as well as an API that can be used to generate drum tabs (consisting of hi-hats, snare and the kick drum) from a music source. It's called ADTLib. It isn't extremely accurate when one tests it, and I'm sure the library will only get better with more data sources available to train the neural networks, but this was definitely a good place to learn a bit about neural networks and libraries like Tensorflow. This blog post is basically meant to serve as my personal notes on how Tensorflow has been used in ADTLib.

So to start off, the ADTLib source code doesn't actually train any neural networks. What ADTLib essentially does is feed a music file through a pre-trained neural network to give us the automatic drum transcriptions in text as well as PDF form. We will need to start off by looking at two methods and one function:

  • methods create() and implement() belonging to class SA
  • function system_restore()

In system_restore() we initialise an instance of SA and call the create() method. There are a lot of parameters that are initialised when we create the neural network graph. We’ll not go into the details of those. Instead let’s look at how Tensorflow is used inside the SA.create() method. I would recommend reading this article on getting started with Tensorflow before going ahead with the next part of this blog post.

If you’ve actually gone through that article, you’d know by now that Tensorflow creates graphs that implement a series of Tensorflow operations. Data flows between the graph's operations as units called ‘tensors’, hence the name ‘Tensorflow’. Great. So, getting back to the create() method, we find that first tf.reset_default_graph() is called. This resets the global variables of the graph and clears the default graph stack.

Next we call a method weight_bias_init(). As the name suggests this method initialises the weights and biases for our model. In a neural network, weights and biases are parameters which can be trained so that the neural network outputs values that are closest to the target output. We can use ‘variables’ to initialise these trainable parameters in Tensorflow. Take these examples from the weight_bias_init() code:

  • self.biases = tf.Variable(tf.zeros([self.n_classes]))
  • self.weights =tf.Variable(tf.random_normal([self.n_hidden[(len(self.n_hidden)-1)]*2, self.n_classes]))

self.biases is set to a variable with an initial value defined by the tensor returned by tf.zeros() (a tensor of shape [self.n_classes] with all elements set to 0; self.n_classes is set to 2 in the ADTLib code). self.weights is initialised to a variable defined by the tensor returned by tf.random_normal(), which returns a tensor of the mentioned shape with random normal values (type float32) with a mean of 0.0 and a standard deviation of 1.0. These weights and biases are trained based on the type of optimisation function later on. In ADTLib no training is actually done wrt these weights and biases; the parameters are loaded from pre-trained neural networks, as I've mentioned before. However, we need these tensors defined in order to be able to implement the neural network on the input music source.
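To make the shapes concrete, here is a plain-Python sketch of the same initialisation (an illustration only; n_hidden_last = 20 is a made-up value, while n_classes = 2 matches the ADTLib code):

```python
import random

# Hypothetical sizes: n_hidden_last is made up for illustration;
# n_classes = 2 matches the ADTLib code.
n_hidden_last = 20
n_classes = 2

# Equivalent of tf.zeros([n_classes]): a bias vector of zeros.
biases = [0.0] * n_classes

# Equivalent of tf.random_normal([n_hidden_last * 2, n_classes]):
# samples from a normal distribution with mean 0.0 and stddev 1.0.
# The factor of 2 reflects the concatenated forward and backward BRNN states.
weights = [[random.gauss(0.0, 1.0) for _ in range(n_classes)]
           for _ in range(n_hidden_last * 2)]

print(len(biases), len(weights), len(weights[0]))  # prints: 2 40 2
```

The weight matrix maps the concatenated bidirectional hidden state down to one score per output class.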

Next we initialise a few ‘placeholders’ and ‘constants’. Placeholders and constants are again ‘tensors’ and resemble units of the graph. Example lines from the code:

  • self.x_ph = tf.placeholder('float32', [1, 1000, 1024])
  • self.seq=tf.constant(self.truncated,shape=[1])

Placeholders are used when a graph needs to be provided external inputs; they can be given values later on. In the above example we define a placeholder that is supposed to hold ‘float32’ values in an array of dimension [1, 1000, 1024]. (Don’t worry about how I arrived at these dimensions. Basically, if you check the init() method for class SA, you’ll understand that ‘self.batch’ is a structure of dimension [1000, 1024].) Constants, as the name suggests, hold constant values. In the above example, self.truncated is initialised to 1000. ‘shape’ is an optional parameter that specifies the dimension of the resulting tensor; here the dimension is set to [1].

Now, ADTLib uses a special type of recurrent neural networks called bidirectional recurrent neural networks (BRNN). Here neurons or cells of a regular RNN are split into two directions, one for positive time direction(forward states), and another for negative time direction(backward states). Inside the create() method, we come across the following code:
self.outputs, self.states= tf.nn.bidirectional_dynamic_rnn(self.fw_cell,
self.bw_cell, self.x_ph,sequence_length=self.seq,dtype=tf.float32)

This creates the BRNN with the two types of cells provided as parameters, the input training data, the length of the sequence (which is 1000 in this case) and the data type. self.outputs is a tuple (output_fw, output_bw) containing the forward and the backward RNN output Tensor.

The forward and backward outputs are concatenated and fed to the second layer of the BRNN as follows:

self.outputs2, self.states2= tf.nn.bidirectional_dynamic_rnn(self.fw_cell2,

We now have the graph that defines how the BRNN should behave. The next few lines of code in the create() method deal with something called soft-attention. This answer on Stack Overflow provides an easy introduction to the concept. Check it out if you want to, but I’ll not go much into those details. What happens essentially is that the forward and backward output cells from the second layer are again concatenated and then further processed to ultimately get a self.presoft value, which resembles (W*x + b), as seen below.

self.attention_m=[tf.tanh(tf.matmul(tf.concat((self.zero_pad_second_out[j:j+self.batch_size],tf.squeeze(self.first_out)),1),self.attention_weights[j])) for j in range((self.attention_number*2)+1)]
self.attention_s=tf.nn.softmax(tf.stack([tf.matmul(self.attention_m[i],self.sm_attention_weights[i]) for i in range(self.attention_number*2+1)]),0)
self.attention_z=tf.reduce_sum([self.attention_s[i]*self.zero_pad_second_out[i:self.batch_size+i] for i in range(self.attention_number*2+1)],0)

Next we come across self.pred=tf.nn.softmax(self.presoft). This basically decides what activation function to use for the output layer; in this case the softmax activation function is used. IMO this is a good reference for different kinds of activation functions.
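As a quick illustration of what a softmax output layer computes, here is a plain-Python sketch (not the ADTLib code): it maps raw scores to positive probabilities that sum to 1.

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# probs are all positive, sum to 1, and preserve the ordering of the inputs
```

Because the outputs sum to 1, each frame's activations can be read directly as class probabilities.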

We now move on to the SA.implement() method. This function takes input audio data, processed by madmom to create a spectrogram. Next, self.saver.restore(sess, self.save_location+'/'+self.filename) loads the respective parameters from the pre-trained neural network files for the respective sounds (hi-hat/snare/kick). These Tensorflow save files can be found under ADTLib/files. Once the parameters are loaded, the Tensorflow graph is executed using sess.run() as follows:
self.test_out.append(sess.run(self.pred, feed_dict={self.x_ph: np.expand_dims(self.batch,0), self.dropout_ph: 1}))

When this function is executed we get the test results, and further processing (a step called peak-picking) is done to get the onset data for the different percussive components.
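Peak-picking itself can be sketched naively in a few lines (an illustration only, not ADTLib's actual peak-picker): mark a frame as an onset when its activation is a local maximum at or above a threshold.

```python
def pick_peaks(activations, threshold=0.5):
    """Return indices whose value is a local maximum at or above the threshold."""
    peaks = []
    for i in range(1, len(activations) - 1):
        a = activations[i]
        if a >= threshold and a > activations[i - 1] and a >= activations[i + 1]:
            peaks.append(i)
    return peaks

# frames 1 and 4 are local maxima above the 0.5 threshold
print(pick_peaks([0.1, 0.9, 0.2, 0.6, 0.7, 0.3]))  # prints: [1, 4]
```

Real peak-pickers typically add smoothing and a minimum distance between onsets, but the core idea is the same.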

I guess that’s it. There are a lot of details that I have omitted from this blog, mostly because they would make it way longer. I’d like to thank the author of ADTLib (Carl Southall), who cleared some icky doubts I had wrt the ADTLib code. There is also a web version of ADTLib that has been developed with an aim to gather more data to train the networks better. So contribute data if you can!

by Samikshan Bairagya at October 22, 2017 03:16 AM

October 18, 2017


Understanding RapidJson – Part 2

In my previous blog on Rapidjson, a lot of people asked for a detailed example in the comments, so here is part 2 of Understanding Rapidjson with a slightly more detailed example. I hope this will help you all.

We will straightaway improve on the last example from the previous blog and modify the changeDom function to add a more complex object to the DOM tree.

template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");              // update "hello", as in part 1
    Document subdoc(&d.GetAllocator());
    subdoc.SetObject();                 // starting the object
    Value arr(kArrayType);              // the innermost array
    Value::AllocatorType allocator;
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, allocator);     // adding values to the array; this function expects an allocator object
    // adding the array to its parent object and so on, finally adding it to the parent doc object
    subdoc.AddMember("New", Value(kObjectType).Move().AddMember("Numbers", arr, allocator), subdoc.GetAllocator());
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally adding the sub-document to the main doc object
    d["f"] = true;
}
Here we are creating Value objects of type kArrayType and kObjectType and appending them to their parent node from innermost to outermost.

Before Manipulation

{
 "hello": "world",
 "t": true,
 "f": false,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [ ... ]
}

After Manipulation

{
 "hello": "c++",
 "t": false,
 "f": true,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [ ... ],
 "testing": {
     "New": {
         "Numbers": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
     }
 }
}
The above changeDom can also be written using a PrettyWriter object, as follows:

template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");                  // update "hello", as in part 1
    Document subdoc(&d.GetAllocator());     // sub-document
    // old school: write the json element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.Key("New");
    writer.StartObject();
    writer.Key("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString());            // Parsing the string written to the buffer to form a sub-DOM

    d.AddMember("testing", subdoc, d.GetAllocator()); // Attaching the sub-DOM to the main DOM object
    d["f"] = true;
}

Happy Coding! Cheers.


by subho at October 18, 2017 02:38 PM

September 28, 2017

Dhriti Shikhar

September Golang Bangalore Meetup

The September Golang Bangalore Meetup was conducted on Saturday, September 16, 2017 at DoSelect, Bengaluru. Around 25-30 people attended the meetup.

The meetup started at 10:15 with the first talk by Baiju Muthukadan, who works at Red Hat India Pvt. Ltd., Bengaluru. He talked about “Testing techniques in Golang”.


Karthikeyan Annamalai  gave a lightning talk about “Building microservice with gRPC”. The slides related to his talk can be found here.


Dinesh Kumar gave an awesome talk about “Gotchas in Golang”. The slides related to his talk can be found here, and the code explained during the demo is here.


The last lightning talk of the meetup was by Akshat, who works at Go-Jek. Akshat talked about “Building an asynchronous http client with retries and hystrix in golang”.


I thank Sanket Saurav and Mohommad Rafy for helping us organize the September Golang Bangalore Meetup by providing the venue and food at DoSelect. Also, I thank Sudipta Sen for helping us out with the meetup preparation.

by Dhriti Shikhar at September 28, 2017 08:47 AM