Planet dgplug

January 01, 2018

Kushal Das

2017 blog review

Around December 2016, I decided to work more on my writing. After asking a few friends, and also after reading suggestions from the masters, it boiled down to one thing: one has to write more. There is no shortcut.

So, I tried to do that throughout 2017. I found early morning was the easiest time for me to read and write, as there is not much noise, and most importantly, Py still sleeps :).

The biggest question while starting was what to write about. I tried to write about whatever I found useful to me, or things I am excited about. I wrote using FocusWriter (most of the time), and saved the documents as plain text files (as I use the Markdown format in my blog). I also received help from many of my friends, who were kind enough to review my writings. Having a second pair of eyes on your writing is really important, as they can not only help find errors, but also show you better ways to express yourself.

One of my weak points (from childhood) is a small stock of words to express myself. But that also means my sentences do not have any words whose meaning one has to look up.

If I just look at the numbers, I wrote 60 blog posts in 2017, which is only 7 more than in 2016. But the number of views of the HTML pages more than doubled.

Did your writing skill improve a lot?

The answer is no. But now, writing is much easier than ever. I can sit down with any of my mechanical keyboards and just type out the things on my mind.

If you ask me about one single thing to read on this topic, I will suggest On Writing by Stephen King.

One thing I still cannot do on time is replying to emails; I am kind of drowned in too many of them. I am trying to slowly unsubscribe from the various lists I have joined over the years. I hope you will find the future blog posts useful in different ways.

by Kushal Das at January 01, 2018 05:42 PM

December 31, 2017

Sanyam Khurana

9 essential questions I asked myself at the end of year 2017

I read an article on Medium about 9 Essential Questions Everyone Should Ask Themselves At The End of The Year. This motivated me to ask myself these questions.

Yes, indeed, the end of December put me in a reflective mood to decipher how the year 2017 was for me. What I learned in these 12 months will definitely help point me in a direction to improve myself.

If you had to describe these previous 12 months, in a sentence, what would that sentence be?

No matter where you stand today, perseverance is something that can get you anything.

Ask yourself — which one event, big or small, is something that you will still talk about in 5 years?

My contributions to CPython were recognized; I got the Developer role on bugs.python.org. The community trusted me with such privileges, and that is something really special :)

What successes, accomplishments, wins, great news and compliments happened this year? How did you feel? What single achievement are you most proud of? Moreover, why?

It was contributing (small) changes to improve the Python programming language. The best news I got was indeed getting promoted to bug triager for CPython. It feels absolutely mesmerizing. A huge part of the folks who code in Python are already using features that I developed.

It feels so encouraging when someone reports a bug on my feature. Even though it means there was a flaw in its working, what gives me happiness is that they were using it.

It is the feeling of helping people all over the world.

Besides that, I am helping a few students to learn to program and contribute to Open Source. That makes me immensely happy.

And last but not least, I learned to play a few songs on the guitar :)

Did something prevent you, or did you use the “Excuse Card” too much?

Yes, indeed. Although it is important to be persistent and work with perseverance, it is also important to know what should be done and where to really apply our efforts.

This corresponds to what I learned in Physics about direction cosines of a force. You need to apply force, but in a way that ensures the angle is small; that is what gives your persistent force the maximum impact.

So, while applying effort is important, it is much more important to apply it in the right direction. Not applying the force in the right direction made it really hard to make progress on things.

Which of your personal virtues or qualities turned out to be the most helpful this year?

I learned that it is important to prioritize things. It is important to get things done. We just need to do small tasks in a simple manner; the big things will take care of themselves.

Being persistent was the most helpful to me this year.

Who was your number one go-to person that you could always rely on?

There are a lot of people who helped me with various things. The list is really long and might bloat up this blog post, so I'll keep it for another day.

But I'm grateful to a lot of folks, who helped me in different phases of my life.

What Was The Most Common Mental State This Year?

It was indeed a roller-coaster ride, full of ups and downs; sometimes several times a month. Life happened, with many important lessons.

Sylvester Stallone, in his role as Rocky Balboa, had an amazing speech:

“Let me tell you something you already know. The world ain’t all sunshine and rainbows. It is a very mean and nasty place, and I do not care how tough you are it will beat you to your knees and keep you there permanently if you let it. You, me, or nobody is gonna hit as hard as life. However, it ain’t about how hard ya hit. It is about how hard you can get hit and keep moving forward. How much you can take and keep moving forward. That is how winning is done!”

I felt euphoric just helping others, be it helping students learn and contribute to FOSS, or helping folks through my work in Open Source. All of it triggers curiosity, excitement, and enthusiasm.

What’s The Difference Between You on January 1st of 2017 vs You Right Now?

  • If you were to write a short biography about yourself right now, what would you say?
  • How would you describe yourself? What is the best thing about you?
  • How about one year ago? What are the main differences?

I learned to appreciate the things I have right now. It is the best I could have got. There is much more to accomplish, and I'll keep working for it.

I completely respect and appreciate the things I've faced. They made me stronger, wiser and more confident.

I appreciate every single thing. Every single person that I know :)

What Are You Grateful For?

I am grateful to someone who made me realize that I wasn't good enough. That lit a fire which fueled enough motivation to keep me going throughout the year. Year after year, it never ends; it just continues...

Alright, this marks the end of the post. Overall, the year was full of excitement, learnings, new friends, a few vacations and what not :)

by Sanyam Khurana at December 31, 2017 07:25 AM

December 28, 2017

Shakthi Kannan

Ansible deployment of Graphite

[Published in Open Source For You (OSFY) magazine, July 2017 edition.]

Introduction

In this fifth article in the DevOps series, we will learn to install and set up Graphite using Ansible. Graphite is a monitoring tool that was written by Chris Davis in 2006. It has been released under the Apache 2.0 license and comprises three components:

  1. Graphite-Web
  2. Carbon
  3. Whisper

Graphite-Web is a Django application and provides a dashboard for monitoring. Carbon is a server that listens to time-series data, while Whisper is a database library for storing the data.

Setting it up

A CentOS 6.8 Virtual Machine (VM) running on KVM is used for the installation. Please make sure that the VM has access to the Internet. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.2.1.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/graphite.yml
ansible/playbooks/admin/uninstall-graphite.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

graphite ansible_host=192.168.122.120 ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the graphite host in the /etc/hosts file as indicated below:

192.168.122.120 graphite

Graphite

The playbook to install the Graphite server is given below:

---
- name: Install Graphite software
  hosts: graphite
  gather_facts: true
  tags: [graphite]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install Graphite server
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - graphite-web

We first import the keys for the Extra Packages for Enterprise Linux (EPEL) repository and update the software package list. The ‘graphite-web’ package is then installed using Yum. The above playbook can be invoked using the following command:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "graphite"

MySQL

A backend database is required by Graphite. By default, the SQLite3 database is used, but we will install and use MySQL as shown below:

- name: Install MySQL
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [database]

  tasks:
    - name: Install database
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - mysql
        - mysql-server
        - MySQL-python
        - libselinux-python

    - name: Start mysqld server
      service:
        name: mysqld
        state: started

    - wait_for:
        port: 3306

    - name: Create graphite database user
      mysql_user:
        name: graphite
        password: graphite123
        priv: '*.*:ALL,GRANT'
        state: present

    - name: Create a database
      mysql_db:
        name: graphite
        state: present

    - name: Update database configuration
      blockinfile:
        path: /etc/graphite-web/local_settings.py
        block: |
          DATABASES = {
            'default': {
            'NAME': 'graphite',
            'ENGINE': 'django.db.backends.mysql',
            'USER': 'graphite',
            'PASSWORD': 'graphite123',
           }
          }

    - name: syncdb
      shell: /usr/lib/python2.6/site-packages/graphite/manage.py syncdb --noinput

    - name: Allow port 80
      shell: iptables -I INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Allow access in graphite-web.conf
      lineinfile:
        path: /etc/httpd/conf.d/graphite-web.conf
        insertafter: '           # Apache 2.2'
        line: '           Allow from all'

    - name: Start httpd server
      service:
        name: httpd
        state: started

As a first step, let’s install the required MySQL dependency packages and the server itself. We then start the server and wait for it to listen on port 3306. A graphite user and database are created for use with the Graphite web application. For this example, the password is provided as plain text. In production, use an encrypted Ansible Vault password.

The database configuration file is then updated to use the MySQL credentials. Since Graphite is a Django application, the manage.py script with syncdb needs to be executed to create the necessary tables. We then allow port 80 through the firewall in order to view the Graphite dashboard. The graphite-web.conf file is updated to allow read access, and the Apache web server is started.

The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "database"

Carbon and Whisper

The Carbon and Whisper Python bindings need to be installed before starting the carbon-cache script.

- name: Install Carbon and Whisper
  hosts: graphite
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [carbon]

  tasks:
    - name: Install carbon and whisper
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - python-carbon
        - python-whisper

    - name: Start carbon-cache
      shell: /etc/init.d/carbon-cache start

The above playbook is invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/graphite.yml --tags "carbon"

Dashboard

You can open http://192.168.122.120 in the browser on the host to view the Graphite dashboard. A screenshot of the Graphite web application is shown below:

Graphite dashboard
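
To quickly check the entire setup end to end, you can also push a test data point to the carbon-cache plaintext listener (port 2003 by default) and watch the metric appear in the dashboard tree. The following minimal Python sketch illustrates the idea; the metric name test.count is only an example, and you may need to allow port 2003 through the VM's firewall first.

import socket
import time

CARBON_HOST = "192.168.122.120"  # the graphite guest VM from the inventory
CARBON_PORT = 2003               # carbon-cache plaintext protocol port

# Carbon's plaintext protocol: "<metric path> <value> <unix timestamp>\n"
message = "test.count 42 %d\n" % int(time.time())

sock = socket.create_connection((CARBON_HOST, CARBON_PORT))
sock.sendall(message.encode("ascii"))
sock.close()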

Uninstall

An uninstall script to remove the Graphite server and its dependency packages is useful for administration. The Ansible playbook for the same is available in the playbooks/admin folder and is given below:

---
- name: Uninstall Graphite and dependencies
  hosts: graphite
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the carbon-cache server
      shell: /etc/init.d/carbon-cache stop

    - name: Uninstall carbon and whisper
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - python-whisper
        - python-carbon

    - name: Stop httpd server
      service:
        name: httpd
        state: stopped

    - name: Stop mysqld server
      service:
        name: mysqld
        state: stopped

    - name: Uninstall database packages
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - libselinux-python
        - MySQL-python
        - mysql-server
        - mysql
        - graphite-web

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-graphite.yml

References

  1. Graphite documentation. https://graphite.readthedocs.io/en/latest/

  2. Carbon. https://github.com/graphite-project/carbon

  3. Whisper database. http://graphite.readthedocs.io/en/latest/whisper.html

December 28, 2017 04:00 PM

December 20, 2017

Sanyam Khurana

Promoted to bug-triager for CPython

This is huge! I couldn't believe it when I woke up to this mail:

Sanyam Khurana has been promoted

Yes, I got promoted to the Developer role on bugs.python.org, which, along with other privileges, provides access to close bugs. But with great power comes great responsibility. Closing bugs means that the information is lost forever, so utmost care has to be taken, and the reason why a bug is closed has to be reported (which might even involve writing code/scripts that prove it :)).

Victor Stinner is mentoring me and a few other folks who have been promoted, to help us learn more about contributing to the CPython code base. I've been reading the devguide and understanding the entire process to be followed.

Recently, we've also been practicing reviewing Pull Requests, along with reporting bugs and contributing code.

I hope to learn more about the process and contribute more to CPython. I wanted to write this post to specially thank Kushal Das and Nick Coghlan, who helped me get started with CPython in Feb 2017 during the PyCon Pune sprints.

Also, thanks to Victor Stinner and Ezio Melotti for providing me those privileges.

I hope to get a better understanding of the code base and contribute more ;)

by Sanyam Khurana at December 20, 2017 03:04 PM

December 19, 2017

Kushal Das

Duplicate MAC address error in Qubes VMs

Just after I did the fresh install of Qubes 4.0rc3, I saw an error about sys-net (and sometimes the same for other VMs) having a duplicate MAC address for the NIC. I rebooted the system a few times, which solved the issue.

Start failed: invalid argument: network device with mac 00:16:3e:5e:6c:00 already exists

But since last week, I started getting the same error again and again. Even if I used the qvm-prefs command to change the MAC address, it still tried to boot using the old address; I could not find the reason behind it. I rebooted the laptop way too many times in the hope of the error vanishing, but to no avail.

At first I checked the file /var/lib/qubes/qubes.xml for the duplicate record of the MAC address, but I found the right value there (the new one I set using the qvm-prefs command).

So, the next step was to remove the whole sys-net. As I had forgotten that I cannot remove it until I remove everything that depends on it, my qvm-remove sys-net command failed. I had to remove all the dependencies using the Qubes Global Settings. Next, I removed the VM/domain and recreated a new one:

$ qvm-remove sys-net
$ sudo su -
# cd /srv/formulas/base/virtual-machines-formula/
# qubesctl top.enable qvm.sys-net
# qubesctl --targets sys-net state.highstate

I am yet to learn about Salt; I found a nice starting guide in the official Qubes documentation.

by Kushal Das at December 19, 2017 02:33 PM

December 05, 2017

Jaysinh Shukla

Book review ‘Docker Up & Running’

book image docker up and running

In the modern era of software engineering, terms are coined with a new wrapper. Such wrappers are required to make bread and butter out of them. Sometimes well-marketed terms are adopted as best practices. I had a lot of confusion about this Docker technology; I was even unfamiliar with the concept of containers. My goal was to get a high-level overview first and then come to a conclusion. I started reading about Docker from its official getting started guide. It helped me to host this blog using Docker, but I was expecting a more in-depth overview. For that reason, I decided to look for better resources. After reading some Quora posts and Goodreads reviews, I decided to read “Docker Up & Running” by K. Matthias and S. Kane. I am sharing my reading experience here.

TL;DR

The book provides a nice overview of the Docker toolchain. It is not a reference book. Even though a few options are deprecated, I would advise you to read this book and then refer to the official documentation to get familiar with the latest developments.

Detailed overview

I got a printed copy at nearly 450 INR (roughly 7 USD, at 1 USD = 65 INR) from Amazon. The price is fairly acceptable with respect to the print quality. The book begins with a little history of containers (Docker is an implementation of the container). The initial chapters give a high-level overview of the Docker toolchain, covering the Docker engine, Docker image, Docker registry, Docker Compose and Docker container. The authors have pointed out situations where Docker is not suitable; I insist you do not skip that topic.

I skipped the dedicated chapter on installing Docker, and I would advise you to skip irrelevant topics too, because the chapters are not interlinked. You should read chapter 5, which discusses the behavior of the container; that chapter cleared many of my confusions. Somehow I got lost in between, but re-reading helped. These chapters are enough to get a general idea about Docker containers and images.

The next chapters focus more on best practices to set up the Docker engine. Frankly, I was not aware of the possible ways to debug, log or monitor containers at runtime. This book points out a few expected production glitches that you should keep in mind. I didn’t like the testing workflow depicted by the authors; I will look for some other references which highlight more strategies to construct a test workflow. If you are aware of any, please share them with me via e-mail.

I knew about achieving auto-scaling using various orchestration tools, and this book provides step-by-step guidance on configuring and using them. The tools mentioned are Docker Swarm, Centurion and the Amazon EC2 Container Service. Unfortunately, the book is missing Kubernetes and Helios here.

As part of the advanced topics, you will find a comparison of various filesystems with a shallow overview of how the Docker engine interacts with them. The same chapter discusses the available execution drivers and introduces LXC as another container technology. This API option was deprecated by Docker version 1.8, which makes libcontainer the only dependency. I learned how Docker containers provide the virtualization layer using namespaces, and how Docker limits the execution of containers using cgroups (control groups). Namespaces and cgroups are GNU/Linux-level dependencies used by Docker under the hood.

If you are an API developer, then you should not skip chapter 11. This chapter discusses two well-followed patterns: the Twelve-Factor App and the Reactive Manifesto. These guidelines are helpful while designing the architecture of your services. The book concludes with further challenges of using Docker as a container tool.

I found one typo, on page 123, second-to-last line.

expore some of the tools... 

Here, expore is a typo and it should be

explore some of the tools... 

I have submitted it to the official errata. At the time of writing this post, it has not been confirmed by the authors. I hope they will confirm it soon.

Who should read this book?

  • Developers who want to get an in-depth overview of the Docker technology.

  • If you set up deployment clusters using Docker, then this book will help you get an overview of Docker engine internals. You will also find security and performance guidelines.

  • This is not a reference book. If you are well familiar with Docker, then this book will not be useful. In that case, the Docker documentation is the best reference.

  • I assume Docker did not support the Windows platform natively when the book was written. The book focuses on the GNU/Linux platform. It highlights ways to run Docker on Windows using VMs and Boot2Docker for non-Linux, VM-based servers.

What to keep in mind?

  • Docker is changing rapidly. There will be situations where the mentioned options are deprecated. In such situations, you will have to browse the latest Docker documentation and follow that.

  • You will be able to understand the official documentation better after reading this book.

Conclusion

  • Your GNU/Linux skills are your Docker skills. Once you understand what Docker is, your decisions will become more mature.

Proofreaders: Dhavan Vaidya, Polprog

Printed Copy

by Jaysinh Shukla at December 05, 2017 05:56 AM

December 01, 2017

Shakthi Kannan

Ansible deployment of RabbitMQ

[Published in Open Source For You (OSFY) magazine, June 2017 edition.]

Introduction

In this fourth article in the DevOps series, we will learn to install RabbitMQ using Ansible. RabbitMQ is a free and open source message broker system that supports a number of protocols such as the Advanced Message Queuing Protocol (AMQP), Streaming Text Oriented Messaging Protocol (STOMP) and Message Queue Telemetry Transport (MQTT). The software has support for a large number of client libraries for different programming languages. RabbitMQ is written using the Erlang programming language and is released under the Mozilla Public License.

Setting it up

A CentOS 6.8 virtual machine (VM) running on KVM is used for the installation. Do make sure that the VM has access to the Internet. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.2.1.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/rabbitmq.yml
ansible/playbooks/admin/uninstall-rabbitmq.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

rabbitmq ansible_host=192.168.122.161 ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the rabbitmq host in the /etc/hosts file as indicated below:

192.168.122.161 rabbitmq

Installation

RabbitMQ requires the Erlang environment, and uses the Open Telecom Platform (OTP) framework. There are multiple sources for installing Erlang: the EPEL repository, Erlang Solutions, and the zero-dependency Erlang provided by RabbitMQ. In this article, we will use the EPEL repository for installing Erlang.

---
- name: Install RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [server]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install RabbitMQ server
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - rabbitmq-server

    - name: Start the RabbitMQ server
      service:
        name: rabbitmq-server
        state: started

    - wait_for:
        port: 5672

After importing the EPEL GPG key and adding the EPEL repository to the system, the yum update command is executed. The RabbitMQ server and its dependencies are then installed. We wait for the RabbitMQ server to start and to listen on port 5672. The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "server"

Dashboard

The RabbitMQ management user interface (UI) is available through plugins.

- name: Start RabbitMQ Management UI
  hosts: rabbitmq
  gather_facts: true
  tags: [ui]

  tasks:
    - name: Start management UI
      command: /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management

    - name: Restart RabbitMQ server
      service:
        name: rabbitmq-server
        state: restarted

    - wait_for:
        port: 15672

    - name: Allow port 15672
      shell: iptables -I INPUT 5 -p tcp --dport 15672 -m state --state NEW,ESTABLISHED -j ACCEPT

After enabling the management plugin, the server needs to be restarted. Since we are running it inside the VM, we need to allow the management user interface (UI) port 15672 through the firewall. The playbook invocation to set up the management UI is given below:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ui"

The default user name and password for the dashboard are ‘guest:guest’. From your host system, you can start a browser and open http://192.168.122.161:15672 to view the login page as shown in Figure 1. The default ‘Overview’ page is shown in Figure 2.

RabbitMQ Login
RabbitMQ Overview

Ruby

We will use a Ruby client example to demonstrate that our installation of RabbitMQ is working fine. The Ruby Version Manager (RVM) will be used to install Ruby as shown below:

- name: Ruby client
  hosts: rabbitmq
  gather_facts: true
  tags: [ruby]

  tasks:
    - name: Import key
      command: gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

    - name: Install RVM
      shell: curl -sSL https://get.rvm.io | bash -s stable

    - name: Install Ruby
      shell: source /etc/profile.d/rvm.sh && rvm install ruby-2.2.6

    - name: Set default Ruby
      command: rvm alias create default ruby-2.2.6

    - name: Install bunny client
      shell: gem install bunny --version ">= 2.6.4"

After importing the required GPG keys, RVM and Ruby 2.2.6 are installed on the CentOS 6.8 VM. The bunny Ruby client for RabbitMQ is then installed. The above playbook for setting up Ruby can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ruby"

We shall create a ‘temperature’ queue to send the values in Celsius. The consumer.rb code to receive the values from the queue is given below:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan  = conn.create_channel
queue = chan.queue("temperature")

begin
  puts " ... waiting. CTRL+C to exit"
  queue.subscribe(:block => true) do |info, properties, body|
    puts " Received #{body}"
  end
rescue Interrupt => _
  conn.close

  exit(0)
end

The producer.rb code to send a sample of five values in degrees Celsius is as follows:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan   = conn.create_channel
queue   = chan.queue("temperature")

values = ["33.5", "35.2", "36.7", "37.0", "36.4"]

values.each do |v|
  chan.default_exchange.publish(v, :routing_key => queue.name)
end
puts "Sent five temperature values."

conn.close

As soon as you start the consumer, you will get the following output:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit

You can then run the producer.rb script that writes the values to the queue:

$ ruby producer.rb

Sent five temperature values.

The received values at the consumer side are printed out as shown below:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit
 Received 33.5
 Received 35.2
 Received 36.7
 Received 37.0
 Received 36.4

We can observe the available connections and the created queue in the management user interface as shown in Figure 3 and Figure 4, respectively.

RabbitMQ Connections RabbitMQ Queues

Uninstall

It is good to have an uninstall script to remove the RabbitMQ server for administrative purposes. The Ansible playbook for the same is available in the playbooks/admin folder and is shown below:

---
- name: Uninstall RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the RabbitMQ server
      service:
        name: rabbitmq-server
        state: stopped

    - name: Uninstall rabbitmq
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - rabbitmq-server

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-rabbitmq.yml

You are encouraged to read the detailed documentation at https://www.rabbitmq.com/documentation.html to know more about the usage, configuration, client libraries and plugins available for RabbitMQ.

December 01, 2017 01:30 PM

November 30, 2017

Saptak Sengupta

Science Hack Day India, 2017

So, finally, I managed to clear up some time to write about the best event of the year I have attended: Science Hack Day India, 2017. This was my second time at Science Hack Day India. SHD 2016 was so phenomenal, there was no way I was missing it this time either. Phenomenal mostly because of the wonderful people I got to meet and really connect with, since the entire atmosphere of the event is more like an informal, friendly unconference. This year it was no different.


                Picture Credit: Sayan Chowdhury

Science Hack Day 2017 was truly bigger, better and even more fun than last year. Held at one of the most happening venues, Sankalp Bhumi Farm, the stay alone is so lovely that one doesn't need much other reason to attend. Unlike last time, this year I had two friends accompanying me to Science Hack Day. We reached early in the morning on day 0. Like at all conferences, it was really good to meet everyone, whom I was personally meeting maybe after 6 months, or a year, or maybe for the very first time. There were general discussions about who is working on what, the new terminal emulator they are using, the nginx trick they might be using, or the great new open source software they came across. But this is something everyone knows happens when techies meet. What most people don't know about are things like the cycling and kayaking that we do. So most of the afternoon was spent by everyone cycling, kayaking and having fun rather than in any serious discussion at all. In the evening there was an informal round of introductions by everyone to get a little accustomed to each other. After dinner, everyone bid goodnight and went to sleep.

But have you ever heard of geeks sleeping just after dinner? Obviously not. So it was only a matter of time before everyone regrouped at the hackerspace which was set up for the next day. Then Farhaan and I had the privilege of listening to stories from a dreamy Sayan Chowdhury, which marked the end of the day for us.

The next morning, after breakfast, it was time for the mentor introductions, which were followed by a great basic explanation of how an aeroplane flies. It reminded me of my science classes, and I started wishing we had had a similar explanation using a proper unmanned aircraft back then. And it wasn't just theory upon theory; we got to see that aircraft actually fly. This marked the actual notion of a hack day: we don't just talk, we make and also break stuff. After this it was time to start with our hacks. Contrary to my earlier plans, my friends and I started working on assembling a 3D printer, which was mainly brought for Hackerspace Belgaum. I had always wondered what the big deal in assembling was, but I realised I was so wrong.

The entire assembly took all day, since we were doing it for the first time and were figuring out stuff as we went. I mostly attached parts while my smarter friends figured everything out and told me what to attach where. By dinner it was ready and assembled. And I was like "Yay! Let's start printing". That is when Siddhesh told me that the trickiest part was yet to be done: calibration. So we got started with it. When calibration was all set and done, it was time to print. We decided to print the "Hello World" of 3D printing, i.e. a cube. So the cube started printing; the first layer got printed, the second layer got printed, and by the third layer, everything came off. We realised the bed wasn't heating.

A little disappointed, we called it a day and went off to bed. The next day we decided to use glue to make the bed somewhat sticky. This time it printed. Not so perfectly, but mostly all good. I have never been more excited to see a tiny little white cube, and neither have I seen so many other people behave the same. After that it was time for rocket flying, followed by a group photo. The event ended with project presentations by every team.

Hoping to come back again next year.

by SaptakS (noreply@blogger.com) at November 30, 2017 03:21 PM

November 14, 2017

Jaysinh Shukla

My experience of mentoring at Django Girls Bangalore 2017

group_photo

TL;DR

Last Sunday, Django Girls Bangalore organized a hands-on session of web programming. This is a small event report from my side.

Detailed overview

Django Girls is a not-for-profit initiative led by Ola Sitarska and Ola Sendecka. The movement helps women learn the skills of website development using the well-known web framework Django. This community is backed by organizers from many countries. Organizations like The Python Software Foundation, GitHub, DjangoProject and many more are funding Django Girls.

The Django Girls Bangalore chapter was organized by Sourav Singh and Kumar Anirudha. This was my second time mentoring for a Django Girls event; the first was for the Ahmedabad chapter. The venue was sponsored by HackerEarth. Eight male and two female mentors guided 21 women during this event. Each mentor was assigned roughly three participants. Introducing participants to web development becomes easy with the help of the Django Girls handbook, a collection of beginner-friendly, hands-on tutorials described in simple language. The handbook covers everything from the basics of the Python programming language to deploying your web application. The pupils under me were already prepared, having pre-configured Python on their workstations. We started by introducing ourselves. We took some time browsing the website of undersea cables. One of the amusing questions I got was, “Isn’t the world connected with satellites?”. My team was comfortable with Python, so we quickly skimmed to the part where I introduced them to the basics of the web and then Django. I noticed mentors were progressing according to the convenience of the participants. A nice amount of time was invested in discussing the queries raised. In the middle of it all, we heard a loud call for lunch. A decent meal was served to all the members, and I networked with other mentors and participants during the break. Post-lunch, we created a blog app and configured it with our existing project. Giving an overview of Django models, topped with the concept of the ORM, turned out to be an arduous task. With time as a constraint, I focused on the admin panel and taught the girls about deploying their websites to PythonAnywhere. I am happy with the hard work done by my team. They were able to demonstrate what they did to the world, and I was more joyful than they were at such an achievement.

The closing ceremony turned into an amusing event for us. Ten copies of the Two Scoops of Django book were distributed to participants chosen by a random draw. I solemnly thank the authors of the book and Pothi.com for gifting such a nice reference. Participants shared their experiences of the day. Mentors pointed out helpful resources to follow up on. They insisted the girls not stop at this point but spread their wings by developing websites using the skills they had learned. T-shirts, stickers and badges were distributed as event swag.

You can find the list of all Django Girls chapters here. Djangonauts are encouraged to become mentors for Django Girls events in their towns. If you can't find any in your town, I encourage you to take the responsibility and organize one. If you are already a part of the Django Girls community, why not share your experience with others?

Proofreaders: Kushal Das, Dhavan Vaidya, Isaul Vargas

by Jaysinh Shukla at November 14, 2017 02:31 AM

October 22, 2017

Samikshan Bairagya

Notes on Tensorflow and how it was used in ADTLib

It's been almost 2 years since I became an amateur drummer (which apparently is also the time since my last blog post), and I have always felt that it would be great to have something that can provide me with drum transcriptions from a given music source. I researched a bit and came across a library that provides an executable as well as an API that can be used to generate drum tabs (consisting of hi-hats, snare and the kick drum) from a music source. It's called ADTLib. It isn't extremely accurate when one tests it, and I'm sure the library will only get better as more data sources become available to train the neural networks, but this was definitely a good place to learn a bit about neural networks and libraries like Tensorflow. This blog post is basically meant to serve as my personal notes on how Tensorflow has been used in ADTLib.

So to start off: the ADTLib source code actually doesn't train any neural networks. What ADTLib essentially does is feed a music file through a pre-trained neural network to give us the automatic drum transcriptions in text as well as PDF form. We will start by looking at two methods and one function inside https://github.com/CarlSouthall/ADTLib/blob/master/ADTLib/utils/__init__.py

  • methods create() and implement() belonging to class SA
  • function system_restore()

In system_restore() we initialise an instance of SA and call the create() method. There are a lot of parameters that are initialised when we create the neural network graph. We’ll not go into the details of those. Instead let’s look at how Tensorflow is used inside the SA.create() method. I would recommend reading this article on getting started with Tensorflow before going ahead with the next part of this blog post.

If you’ve actually gone through that article you’d know by now that Tensorflow creates graphs that implement a series of Tensorflow operations. The operations flow through the units of the graphs called ‘tensors’ and hence the name ‘tensorflow’. Great. So, getting back to the create() method, we find that first a tf.reset_default_graph() is called. This resets the global variables of the graph and clears the default graph stack.

Next we call a method weight_bias_init(). As the name suggests this method initialises the weights and biases for our model. In a neural network, weights and biases are parameters which can be trained so that the neural network outputs values that are closest to the target output. We can use ‘variables’ to initialise these trainable parameters in Tensorflow. Take these examples from the weight_bias_init() code:

  • self.biases = tf.Variable(tf.zeros([self.n_classes]))
  • self.weights =tf.Variable(tf.random_normal([self.n_hidden[(len(self.n_hidden)-1)]*2, self.n_classes]))

self.biases is set to a variable whose initial value is defined by the tensor returned by tf.zeros(), which here returns a tensor of shape [2] with all elements set to 0, since self.n_classes is set to 2 in the ADTLib code. self.weights is initialised to a variable defined by the tensor returned by tf.random_normal(), which returns a tensor of the mentioned shape with random normal values (type float32), with a mean of 0.0 and a standard deviation of 1.0. These weights and biases are trained based on the type of optimisation function later on. In ADTLib no training is actually done with respect to these weights and biases; these parameters are loaded from pre-trained neural networks, as I've mentioned before. However, we need these tensors defined in order to be able to implement the neural network on the input music source.
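
As a side note, here is a minimal, self-contained sketch (TensorFlow 1.x API) of the two initialisations quoted above, with n_classes=2 and a made-up hidden size, just to see the resulting shapes; the actual sizes in ADTLib are different.

import tensorflow as tf

n_classes = 2
n_hidden_last = 3  # stand-in for self.n_hidden[len(self.n_hidden)-1]

biases = tf.Variable(tf.zeros([n_classes]))
weights = tf.Variable(tf.random_normal([n_hidden_last * 2, n_classes]))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # assigns the initial values
    b, w = sess.run([biases, weights])
    print(b.shape, w.shape)  # (2,) and (6, 2)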

Next we initialise a few ‘placeholders’ and ‘constants’. Placeholders and constants are again ‘tensors’ and resemble units of the graph. Example lines from the code:

  • self.x_ph = tf.placeholder(tf.float32, shape=[1, 1000, 1024])
  • self.seq=tf.constant(self.truncated,shape=[1])

Placeholders are used when a graph needs to be provided external inputs; they can be given values later on. In the above example we define a placeholder that is supposed to hold ‘float32’ values in an array of dimension [1, 1000, 1024]. (Don’t worry about how I arrived at these dimensions; basically, if you check the init() method of class SA, you’ll understand that ‘self.batch’ is a structure of dimension [1000, 1024].) Constants, as the name suggests, hold constant values. In the above example, self.truncated is initialised to 1000. ‘shape’ is an optional parameter that specifies the dimension of the resulting tensor; here the dimension is set to [1].

Now, ADTLib uses a special type of recurrent neural network called a bidirectional recurrent neural network (BRNN). Here the neurons or cells of a regular RNN are split into two directions, one for the positive time direction (forward states), and another for the negative time direction (backward states). Inside the create() method, we come across the following code:
self.outputs, self.states = tf.nn.bidirectional_dynamic_rnn(self.fw_cell,
    self.bw_cell, self.x_ph, sequence_length=self.seq, dtype=tf.float32)

This creates the BRNN with the two types of cells provided as parameters, the input training data, the length of the sequence (which is 1000 in this case) and the data type. self.outputs is a tuple (output_fw, output_bw) containing the forward and the backward RNN output Tensors.
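
To see the shape of that tuple concretely, here is a small standalone sketch (TensorFlow 1.x API) of a single BRNN layer; the cell sizes are made up, and only the input shape [1, 1000, 1024] mirrors the ADTLib numbers discussed above.

import numpy as np
import tensorflow as tf

tf.reset_default_graph()

x_ph = tf.placeholder(tf.float32, shape=[1, 1000, 1024])  # [batch, time, features]
seq = tf.constant([1000])                                 # per-example sequence length

fw_cell = tf.nn.rnn_cell.LSTMCell(20)  # forward-direction cell
bw_cell = tf.nn.rnn_cell.LSTMCell(20)  # backward-direction cell

outputs, states = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, x_ph, sequence_length=seq, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    fw, bw = sess.run(outputs, feed_dict={x_ph: np.zeros((1, 1000, 1024), np.float32)})
    print(fw.shape, bw.shape)  # (1, 1000, 20) each: one output per time step per direction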

The forward and backward outputs are concatenated and fed to the second layer of the BRNN as follows:

self.first_out = tf.concat((self.outputs[0], self.outputs[1]), 2)
self.outputs2, self.states2 = tf.nn.bidirectional_dynamic_rnn(self.fw_cell2,
    self.bw_cell2, self.first_out, sequence_length=self.seq2, dtype=tf.float32)

We now have the graph that defines how the BRNN should behave. The next few lines of code in the create() method deal with something called soft-attention. This answer on Stack Overflow provides an easy introduction to the concept. Check it out if you want to, but I'll not go much into those details. What happens essentially is that the forward and backward output cells from the second layer are again concatenated and then further processed to ultimately get a self.presoft value, which resembles (W*x + b), as seen below.

self.zero_pad_second_out = tf.pad(tf.squeeze(self.second_out),
    [[self.attention_number, self.attention_number], [0, 0]])
self.attention_m = [tf.tanh(tf.matmul(tf.concat((self.zero_pad_second_out[j:j+self.batch_size],
    tf.squeeze(self.first_out)), 1), self.attention_weights[j]))
    for j in range((self.attention_number*2)+1)]
self.attention_s = tf.nn.softmax(tf.stack([tf.matmul(self.attention_m[i], self.sm_attention_weights[i])
    for i in range(self.attention_number*2+1)]), 0)
self.attention_z = tf.reduce_sum([self.attention_s[i]*self.zero_pad_second_out[i:self.batch_size+i]
    for i in range(self.attention_number*2+1)], 0)
self.presoft = tf.matmul(self.attention_z, self.weights) + self.biases

Next we come across self.pred=tf.nn.softmax(self.presoft). This basically decides which activation function to use for the output layer; in this case the softmax activation function is used. IMO this is a good reference for different kinds of activation functions.
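
As a tiny illustration of what softmax does to the raw (W*x + b) scores, consider this sketch with made-up numbers (TensorFlow 1.x API): it turns the scores into probabilities that sum to 1.

import tensorflow as tf

presoft = tf.constant([[2.0, 0.5]])  # one row of raw class scores
pred = tf.nn.softmax(presoft)

with tf.Session() as sess:
    print(sess.run(pred))  # approximately [[0.82, 0.18]], probabilities summing to 1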

We now move on to the SA.implement() method. This method takes input audio data, processed by madmom to create a spectrogram. Next, self.saver.restore(sess, self.save_location+'/'+self.filename) loads the respective parameters from the pre-trained neural network files for the respective sounds (hi-hat/snare/kick). These Tensorflow save files can be found under ADTLib/files. Once the parameters are loaded, the Tensorflow graph is executed using sess.run() as follows:
self.test_out.append(sess.run(self.pred,
    feed_dict={self.x_ph: np.expand_dims(self.batch, 0), self.dropout_ph: 1}))

When this function is executed, we get the test results, and further processing is done (a process called peak-picking) to get the onset data for the different percussive components.

I guess that’s it. There are a lot of details that I have omitted from this blog, mostly because it would make the blog way longer. I’d like to thank the author of ADTLib (Carl Southall) who cleared some icky doubts I had wrt to the ADTLib code. There is also a web version of ADTLib that has been developed with an aim to gather more data to train the networks better. So contribute data if you can!


by Samikshan Bairagya at October 22, 2017 03:16 AM

October 18, 2017

Subho

Understanding RapidJson – Part 2

In my previous blog on RapidJson, a lot of people asked for a detailed example in the comments, so here is part 2 of Understanding RapidJson with a slightly more detailed example. I hope this will help you all.

We will straightaway improve on the last example from the previous blog and modify the changeDom function to add a more complex object to the DOM tree.

template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator()); // sub-document sharing the main document's allocator
    subdoc.SetObject();                 // starting the object
    Value arr(kArrayType);              // the innermost array
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, subdoc.GetAllocator()); // adding values to the array; PushBack expects an allocator object
    // adding the array to its parent object and so on, finally adding it to the parent doc object
    subdoc.AddMember("New", Value(kObjectType).Move().AddMember("Numbers", arr, subdoc.GetAllocator()), subdoc.GetAllocator());
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally adding the sub-document to the main doc object
    d["f"] = true;
    d["t"].SetBool(false);
}

Here we are creating Value objects of type kArrayType and kObjectType and appending them to their parent node from the innermost to the outermost. Note that every PushBack and AddMember call uses the document's own allocator (subdoc shares the allocator of d): a separate local Value::AllocatorType would free its memory when it goes out of scope, leaving dangling references inside the DOM.

Before manipulation
{
 "hello": "world",
 "t": true,
 "f": false,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
 0,
 1,
 2,
 3
 ]
}
After manipulation
{
 "hello": "c++",
 "t": false,
 "f": true,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
    0,
    1,
    2,
    3
  ],
 "testing": {
     "New": {
         "Numbers": [
             0,
             1,
             2,
             3,
             4,
             5,
             6,
             7,
             8,
             9
         ]
     }
 }
}

The above changeDom can also be written using a PrettyWriter object as follows:

template <typename Document>
void changeDom(Document& d){
    Value& node = d["hello"];
    node.SetString("c++");
    Document subdoc(&d.GetAllocator()); // sub-document
    // old school: write the json element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.String("New");
    writer.StartObject();
    writer.String("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString()); // parsing the string written to the buffer to form a sub DOM

    d.AddMember("testing", subdoc, d.GetAllocator()); // attaching the sub DOM to the main DOM object
    d["f"] = true;
    d["t"].SetBool(false);
}

Happy Coding! Cheers.

More reads:
https://stackoverflow.com/questions/32896695/rapidjson-add-external-sub-document-to-document


by subho at October 18, 2017 02:38 PM

September 28, 2017

Dhriti Shikhar

September Golang Bangalore Meetup

The September Golang Bangalore Meetup was conducted on Saturday, September 16, 2017 at DoSelect, Bengaluru. Around 25-30 people attended the meetup.

The meetup started at 10:15 with the first talk by Baiju Muthukadan who works at Red Hat India Pvt. Ltd., Bengaluru. He talked about “Testing techniques in Golang”.

IMG_20170916_102219

Karthikeyan Annamalai gave a lightning talk about “Building microservice with gRPC”. The slides related to his talk can be found here.

karthik

Dinesh Kumar gave an awesome talk about “Gotcha’s in Golang”. The slides related to his talk can be found here, and the code explained during the demo is here.

IMG_20170916_115114.jpg

The last lightning talk of the meetup was by Akshat, who works at Go-Jek. Akshat talked about “Building an asynchronous http client with retries and hystrix in golang“.

IMG_20170916_125031.jpg

I thank Sanket Saurav and Mohommad Rafy for helping us organize the September Golang Bangalore Meetup by providing the venue and food at DoSelect. Also, I thank Sudipta Sen for helping us out with the meetup preparation.


by Dhriti Shikhar at September 28, 2017 08:47 AM

Serverless Architecture

I attended the Serverless Architecture Meetup organized by Hasgeek on Saturday, September 23, which got me curious to learn more about serverless architecture. The meetup was conducted at Walmart Labs, Bengaluru.

The first talk was by Akhilesh Singh, who is a Senior Technical Consultant at Google. Akhilesh talked about:

  • What is Serverless Architecture?
  • Evolution of serverless
  • Serverless vs IaaS model

Akhilesh was very proficient in not only explaining what serverless architecture is, but also putting across his point of view on the trend.

The second talk was by Ganesh Samarthyam, co-founder of CodeOps Technologies, and Srushith Repakula, software engineer at the same company. Ganesh talked about how serverless architecture is applied in practice, and Srushith showed a demo application for auto-retweeting, written in Python using Apache OpenWhisk.

The most interesting part of the meetup was the Panel Discussion. The panel members were:

  • Akhilesh Singh
  • Ganesh Samarthyam
  • Joydeep Sen Sarma (Co-founder & CTO, Qubole)
  • Rishu Mehrotra (SRE Manager, LinkedIn)

During the meetup, a lot of questions were raised around:

  • Security in serverless architecture
  • How resources are utilized
  • The role of DevOps in serverless architecture, etc.

These are my notes on serverless architecture:

Servers

Conventionally, servers:

  • have fixed resources
  • are supposed to run all the time
  • are managed by system administrators

Problem with Servers

  1. When traffic increases, servers are not able to handle the enormous number of requests and crash.

Paas

  1. To handle the above problem, PaaS came into existence, which offered scaling.
  2. This can be considered the first iteration of serverless.
  3. You still think about servers, but you don't have to manage them.

What does Server-less mean?

The word “server-less” does not mean no servers at all. It simply means eliminating the ‘managing’ of servers.

What is Serverless?

  1. Serverless computing is a cloud computing execution model in which the cloud provider
    • manages the allocation of machine resources
    • bills based on the actual amount of resources consumed by the application (rather than billing on pre-purchased units of capacity)

What problem does Serverless architecture solve?

  1. We build our applications around VMs. We have a VM for each:
    • database
    • web
    • application
  2. If a VM fails, a layer of our application fails.
  3. Even if we break the application down into smaller containers or microservices, when these microservices or their infrastructure fail, our application fails.

Advantages of Serverless architecture

1. Focus on application development rather than managing servers.

2. Serverless provisions are completely managed by providers using automated systems, which eliminates the need for system administrators.

Stateless Nature of Serverless architecture

1. Serverless architectures are event driven.

2. This means that for each event or request to the server, a state is created.
After the request is served, the state is destroyed.

Problem with Statelessness

  1. There are different use cases for stateless architecture, so your application architecture needs to be redesigned according to the use case.
  2. State can be stored across multiple requests with:
    • an in-memory db like Redis
    • simple object storage
  3. This is slower than storing state in:
    • cache
    • RAM

Function As A Service (FaaS)

  1. A way to implement serverless architecture
  2. What is a function?
    • A function is a small program that does one small thing
  3. Short-lived functions are invoked upon each request, and the provider bills the client for running each individual function (a minimal sketch follows this list).
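
Here is a minimal sketch of such a function, written in Python against the AWS Lambda handler convention as one concrete example. The event fields used here are assumptions for illustration; each provider defines its own event shape.

import json

def handler(event, context):
    # The platform invokes this function once per request/event;
    # no server process is kept running in between.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }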

Popular FaaS Services

  1. AWS Lambda
  2. Google Cloud Functions
  3. IBM BlueMix OpenWhisk
  4. hook.io

FaaS vs Managed Servers

1. Similarity:
You don't have to manage the servers.

2. Fundamental difference:
In FaaS, you don't need to manage server applications either.

Advantages of FaaS

  1. Two FaaS functions written in different languages can interact with each other easily.
  2. Multiple functions can be connected and chained together to implement reusable components.

FaaS vs PaaS

Consider an e-commerce website. On a normal day, the traffic is average. But during holidays, we could expect a sudden surge in traffic. In that case, the server will not be able to serve so many requests and will eventually crash. This can be solved by scaling the server resources.

In PaaS, scaling is provided, but you need to estimate how many resources you would need and then provision them accordingly. The problem with this is that you might over- or under-estimate. If you overestimate, then even on normal days you would pay for unused resources. If you underestimate, then your server will crash when traffic increases.

In FaaS, the biggest USP is ‘automatic scaling’. You don't have to think about scaling at all; automatic horizontal scaling is managed by the provider and is completely elastic.

Backend As A Service (BaaS)

  1. It integrates into the FaaS architecture.
  2. BaaS provides entire application components as a service, like:
    • DB storage
    • push notifications
    • analytics

FaaS Cold Start Problem

  1. Cold-starting a function on a serverless platform takes a considerable amount of time.
  2. This is bad in cases where certain functions are accessed infrequently.
  3. This can be overcome by a process called ‘warming’, wherein functions are invoked periodically.

FaaS Time Limit Problem

  1. FaaS functions have a time limit within which they have to run.
  2. If they exceed it, they will be automatically killed.
  3. So the application should be redesigned to divide a long-lived function into multiple coordinated functions.

Vendor lock-in

  1. This is the major disadvantage of FaaS.
  2. When you move from one provider to another, you will need to change your code accordingly.

Serverless Architecture

  1. Serverless goes a step beyond, where you don't even have to think about capacity in advance.
  2. You would generally run a monolithic application on a PaaS.
  3. Serverless lets you break your application into small self-contained programs (functions).
    • Example:
      • Each API endpoint can be a separate function
  4. From an operations perspective, the reason you would break down your app into functions is to scale and deploy them separately.
    • Example:
      • If one of your API endpoints gets 90% of the traffic, then that one bit of code/function can be distributed and scaled much more easily than your entire application.

by Dhriti Shikhar at September 28, 2017 08:16 AM

September 20, 2017

Sanjiban Bairagya

Randa 2017 Report – Marble Maps

Just came back home yesterday from the Randa Meetings 2017. This year, my major motive for the sprint was to use Qt 5.8's Qt Speech module instead of custom Java for text-to-speech during navigation, but that could not be achieved because of a bug which made routes not appear in the app in the first place. This bug is reproducible using both the latest code and old-enough code, and is even there in the production app in the Google Play Store itself. So although most of my time went into deep-diving on the issue, I was unfortunately not able to find its root cause. I will need to pick that up again in the coming weeks when I get time, to get it fixed. Apart from that, I managed to fix a few more bugs and work on adding a splash screen to the Android app:

  • Made the bookmarks dialog responsive to touch everywhere. When you tapped on the Search field in the app with nothing written in it, the bookmarks dialog showed up there. If the number of bookmarks was <= 6, the far-lower part of the dialog did nothing on being tapped. Made that portion choose the last item from the bookmarks list in that case.
  • There was an issue where pressing the back button always closed the app instead of handling the event depending on the scenario. On investigation, the root cause turned out to lie in the code of Kirigami itself. Reported it to Marco Martin, and worked around the issue in the Marble app by using Kirigami.AbstractApplicationWindow instead of Kirigami.ApplicationWindow in the QML. Also reported that SVG icons were not showing up in the app on Plasma Mobile, because SVG icons are not enabled there.

  • Worked on adding a splash screen to the Marble Maps app. The highest-resolution Marble logo we had in PNG at the time was 192×192, which looked a bit blurry, as if it had been scaled up, so I displayed it at 100×100 in the splash screen to make it look sharp. Later, Torsten created a 256×256 version of the PNG from an SVG, which got rid of the blur. So I added that to the splash screen instead, and the icon there looks much bigger, sharper, and non-blurry now.

Apart from work, there was snow this year. I did some interesting acro-yoga with Frederik, went on a 3-hour hike to walk across the longest pedestrian suspension bridge in the world, toured Zermatt to catch a glimpse of the Matterhorn, and ended it all with a delicious barbecue with Emmanuel and Tomaz somewhere up in the mountains on the final evening. Thanks to Mario for organizing the meetings and the cheese and chocolates, without which no work could have been done.



by sanjibanbairagya at September 20, 2017 02:41 AM

August 22, 2017

Anwesha Das

The mistakes I made in my blog posts

Today we will be discussing the mistakes I made with my blog posts.
I started (seriously) writing blogs a year back. A few of my posts got a pretty nice response. The praise put me in seventh heaven; I thought I was a fairly good blogger. But after almost a year of writing, one day I chanced upon one of my older posts, and reading it sent me crashing down to earth.

There was a huge list of mistakes I had made:

The post was a perfect example of TL;DR. I used to judge a post by quantity: the larger the number of words, the better! (Typical lawyer mentality!)

The title and the lead paragraph were vague.

The sentences were long (far too long).

There were plenty of grammatical mistakes.

I lost the flow of thought and broke the logical chain in many places.

The measures I took to solve my problem

I was upset. I stopped writing for a month or so.
After the depressed, dispirited phase was over, I got back up, dusted myself off, and tried to find ways to become a better writer.

Talks, books, blogs:

I searched for talks, writings, and books on “how to write good blog posts”, and started reading and watching videos. I tried to follow their advice while writing my posts.

Earlier I used to take a lot of time (a week) to write each post. I used to flit from one sentence to the next, so that I did not forget the latest idea or thought that popped into my head.
But that caused two major problems:

First, the long writing time also meant long breaks. The interval broke my chain of thought anyway, and I had to start again from the beginning. That resulted in confused views and unrelated sentences.

Secondly, it was also responsible for the huge length of the posts.

Now I dedicate limited time, a few hours, to each post, depending on the idea.
And I strictly adhere to those hours. I use Tomato Timer to keep a check on the time. During that time I do not go to my web browser, check my phone, or do any household activity, and of course I ignore my husband completely.
But one thing I am still not able to avoid is the “Mamma no working. Let's play” situation. :)
I focus on the sentence I am writing. I do not jump between sentences. I’ve made peace with the fear of losing one thought and I do not disturb the one I am working on. This keeps my ideas clear.

To finish my work within the stipulated time:

  • I write during quieter hours, especially in the morning,
  • I plan what to write the day before,
  • I am caffeinated while writing.

Sometimes I cannot finish it in one go. Then, before starting the next day, I read aloud what I wrote previously.

Revision:

Previously, after I finished writing, I used to correct only the red underlines. Now I take my time and follow four steps before publishing a post:

  • correct the underlined places,
  • check the grammar,
  • read the post aloud at least twice; this helps me hear my own words and correct my own mistakes,
  • have some friends check the post before publishing: an extra pair of human eyes to catch errors.

Respect the readers

This single piece of advice has changed my posts for the better.
Respect the reader.
Don’t give them any false hopes or expectations.

With that in mind, I have altered the following two things in my blog:

Vague titles

I always thought out of the box and figured that sarcastic titles would showcase my intelligence; an offhand, humorous title seemed good. How utterly wrong I was.

People search by asking relevant questions on the topic.
For example, for a hardware project with esp8266 using MicroPython, people may search for:
  • “esp8266 projects”
  • “projects with micropython”
  • “fun hardware projects”, etc.
But no one will search for “mybunny uncle” (it might remind you of your kindly uncle, but definitely not of a hardware project in any sense of the term).

People find your blog posts via RSS feeds or by searching in a search engine.
So be as direct as possible. Give a title that describes the core of the content. In the words of Cory Doctorow, write your headlines as if you were a wire-service writer.

Vague lead paragraph

The lead paragraph, the opening paragraph of your post, must explain what follows. Many times, the lead paragraph is part of the search result.

Avoid conjunctions and past participles

I try not to use conjunctions, connecting clauses, or the past participle tense. They make a sentence complicated to read.

Use simple words

I use simple, easy words instead of hard, heavy, huge ones. It was so difficult to make the lawyer (inside me) understand that “simple is better than complicated”.

The one thing which is still difficult for me is to let go: to accept the fact that not all of my posts will be great, or even good.
There will be faults in them, and that is fine.
Instead of putting all my effort into making a single piece better, I'd rather move on and work on other topics.

by Anwesha Das at August 22, 2017 03:18 AM