Planet dgplug

December 05, 2017

Jaysinh Shukla

Book review ‘Docker Up & Running’

book image docker up and running

In the modern era of software engineering, old ideas are often coined as new terms. Such wrappers are required to make bread-and-butter out of them, and sometimes well-marketed terms are adopted as best practices. I had a lot of confusion about this Docker technology; I was even unfamiliar with the concept of containers. My goal was to get a high-level overview first and then come to a conclusion. I started reading about Docker from its official getting started guide. It helped me to host this blog using Docker, but I was expecting a more in-depth overview. For that reason, I decided to look for better resources. After reading some Quora posts and Goodreads reviews, I decided to read “Docker Up & Running” by K. Matthias and S. Kane. I am sharing my reading experience here.

TL;DR

The book provides a nice overview of the Docker toolchain. It is not a reference book. Even though a few options are deprecated, I would advise you to read this book and then refer to the official documentation to get familiar with the latest developments.

Detailed overview

I got a printed copy for nearly 450 INR (roughly 7 USD, at 1 USD = 65 INR) from Amazon. The price is fairly acceptable with respect to the print quality. The book begins with a little history of containers (Docker is one implementation of the container concept). The initial chapters give a high-level overview of the Docker toolchain, covering the Docker engine, Docker images, the Docker registry, Docker Compose and Docker containers. The authors also point out situations where Docker is not suitable; I urge you not to skip that topic. I skipped the dedicated chapter on installing Docker, and I would advise you to skip irrelevant topics too, because the chapters are not interlinked. You should read chapter 5, which discusses the behavior of containers; that chapter cleared up many of my confusions. Somehow I got lost in between, but re-reading helped. These chapters are enough to get a general idea about Docker containers and images.

The next chapters focus more on best practices for setting up the Docker engine. Frankly, I was not aware of the possible ways to debug, log or monitor containers at runtime. The book also points out a few production glitches that you should keep in mind. I didn't like the testing workflow depicted by the authors, so I will look for other references which highlight more strategies for constructing a test workflow. If you are aware of any, please share them with me via e-mail. I knew about achieving auto-scaling using various orchestration tools, and this book provides step-by-step guidance on configuring and using them. The tools covered are Docker Swarm, Centurion and the Amazon EC2 Container Service. Unfortunately, the book misses Kubernetes and Helios here.

As part of the advanced topics, you will find a comparison of various filesystems with a shallow overview of how the Docker engine interacts with them. The same chapter discusses the available execution drivers and introduces LXC as another container technology; this option was deprecated in Docker version 1.8, which makes libcontainer the only dependency. I learned how Docker containers provide the virtualization layer using namespaces, and how Docker limits the execution of a container using cgroups (control groups). Namespaces and cgroups are GNU/Linux kernel features used by Docker under the hood. If you are an API developer, then you should not skip chapter 11, which discusses two well-followed patterns: the Twelve-Factor App and the Reactive Manifesto. These guidelines are helpful while designing the architecture of your services. The book concludes with further challenges of using Docker as a container tool.
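As a quick illustration of the namespaces and cgroups point above: resource limits can be passed directly to docker run, and the engine enforces them through cgroups under the hood. The flags below are standard Docker options, but the values are my own arbitrary example, not a recommendation from the book:

$ # cap the container at 512 MB of RAM and a reduced CPU share
$ docker run -d --name web --memory=512m --cpu-shares=512 nginx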

I found one typo, on page 123, in the second-to-last line:

expore some of the tools... 

Here, expore is a typo and it should be

explore some of the tools... 

I have submitted it to the official errata. At the time of writing this post, it has not been confirmed by the authors. I hope they will confirm it soon.

Who should read this book?

  • Developers who want to get an in-depth overview of the Docker technology.

  • If you set up deployment clusters using Docker, then this book will help you to get an overview of Docker engine internals. You will find security and performance guidelines.

  • This is not a reference book. If you are well familiar with Docker, then this book will not be useful. In that case, the Docker documentation is the best reference.

  • I assume Docker did not support the Windows platform natively when the book was written. The book focuses on the GNU/Linux platform, but it highlights ways to run Docker on Windows using VMs, and Boot2Docker for non-Linux, VM-based servers.

What to keep in mind?

  • Docker is changing rapidly. There will be situations where the options mentioned in the book are deprecated. In such situations, you will have to browse the latest Docker documentation and follow that instead.

  • You will be able to understand the official documentation better after reading this book.

Conclusion

  • Your GNU/Linux skills are your Docker skills. Once you understand what Docker is, your decisions will become more mature.
Proofreaders: Dhavan Vaidya, Polprog

Printed Copy

by Jaysinh Shukla at December 05, 2017 05:56 AM

December 01, 2017

Shakthi Kannan

Ansible deployment of RabbitMQ

[Published in Open Source For You (OSFY) magazine, June 2017 edition.]

Introduction

In this fourth article in the DevOps series, we will learn to install RabbitMQ using Ansible. RabbitMQ is a free and open source message broker system that supports a number of protocols such as the Advanced Message Queuing Protocol (AMQP), Streaming Text Oriented Messaging Protocol (STOMP) and Message Queue Telemetry Transport (MQTT). The software has support for a large number of client libraries for different programming languages. RabbitMQ is written using the Erlang programming language and is released under the Mozilla Public License.

Setting it up

A CentOS 6.8 virtual machine (VM) running on KVM is used for the installation. Do make sure that the VM has access to the Internet. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.2.1.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/rabbitmq.yml
ansible/playbooks/admin/uninstall-rabbitmq.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

rabbitmq ansible_host=192.168.122.161 ansible_connection=ssh ansible_user=root ansible_password=password

Also, add an entry for the rabbitmq host in the /etc/hosts file as indicated below:

192.168.122.161 rabbitmq

Installation

RabbitMQ requires the Erlang environment, and uses the Open Telecom Platform (OTP) framework. There are multiple sources for installing Erlang: the EPEL repository, Erlang Solutions, and the zero-dependency Erlang package provided by RabbitMQ. In this article, we will use the EPEL repository for installing Erlang.

---
- name: Install RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [server]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install RabbitMQ server
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - rabbitmq-server

    - name: Start the RabbitMQ server
      service:
        name: rabbitmq-server
        state: started

    - wait_for:
        port: 5672

After importing the EPEL GPG key and adding the EPEL repository to the system, the yum update command is executed. The RabbitMQ server and its dependencies are then installed. We wait for the RabbitMQ server to start and to listen on port 5672. The above playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "server"

Dashboard

The RabbitMQ management user interface (UI) is available through plugins.

- name: Start RabbitMQ Management UI
  hosts: rabbitmq
  gather_facts: true
  tags: [ui]

  tasks:
    - name: Start management UI
      command: /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management

    - name: Restart RabbitMQ server
      service:
        name: rabbitmq-server
        state: restarted

    - wait_for:
        port: 15672

    - name: Allow port 15672
      shell: iptables -I INPUT 5 -p tcp --dport 15672 -m state --state NEW,ESTABLISHED -j ACCEPT

After enabling the management plugin, the server needs to be restarted. Since we are running it inside the VM, we need to allow the management user interface (UI) port 15672 through the firewall. The playbook invocation to set up the management UI is given below:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ui"

The default user name and password for the dashboard are ‘guest:guest’. From your host system, you can start a browser and open http://192.168.122.161:15672 to view the login page as shown in Figure 1. The default ‘Overview’ page is shown in Figure 2.
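If you would rather not use the default credentials, a dedicated administrative user can be created with rabbitmqctl inside the VM; the user name and password below are only placeholders:

$ rabbitmqctl add_user sysadmin s3cret
$ rabbitmqctl set_user_tags sysadmin administrator
$ rabbitmqctl set_permissions -p / sysadmin ".*" ".*" ".*"

You can then log in to the dashboard with this user instead of ‘guest’.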

RabbitMQ Login
RabbitMQ Overview

Ruby

We will use a Ruby client example to demonstrate that our installation of RabbitMQ is working fine. The Ruby Version Manager (RVM) will be used to install Ruby as shown below:

- name: Ruby client
  hosts: rabbitmq
  gather_facts: true
  tags: [ruby]

  tasks:
    - name: Import key
      command: gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3

    - name: Install RVM
      shell: curl -sSL https://get.rvm.io | bash -s stable

    - name: Install Ruby
      shell: source /etc/profile.d/rvm.sh && rvm install ruby-2.2.6

    - name: Set default Ruby
      command: rvm alias create default ruby-2.2.6

    - name: Install bunny client
      shell: gem install bunny --version ">= 2.6.4"

After importing the required GPG key, RVM and Ruby 2.2.6 are installed on the CentOS 6.8 VM. The bunny Ruby client for RabbitMQ is then installed. The playbook invocation to set up Ruby is given below:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/rabbitmq.yml --tags "ruby"

We shall create a ‘temperature’ queue to send the values in Celsius. The consumer.rb code to receive the values from the queue is given below:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan  = conn.create_channel
queue = chan.queue("temperature")

begin
  puts " ... waiting. CTRL+C to exit"
  queue.subscribe(:block => true) do |info, properties, body|
    puts " Received #{body}"
  end
rescue Interrupt => _
  conn.close

  exit(0)
end

The producer.rb code to send a sample of five values in degrees Celsius is as follows:

#!/usr/bin/env ruby

require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start

chan   = conn.create_channel
queue   = chan.queue("temperature")

values = ["33.5", "35.2", "36.7", "37.0", "36.4"]

values.each do |v|
  chan.default_exchange.publish(v, :routing_key => queue.name)
end
puts "Sent five temperature values."

conn.close

As soon as you start the consumer, you will get the following output:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit

You can then run the producer.rb script that writes the values to the queue:

$ ruby producer.rb

Sent five temperature values.

The received values at the consumer side are printed out as shown below:

$ ruby consumer.rb 
 ... waiting. CTRL+C to exit
 Received 33.5
 Received 35.2
 Received 36.7
 Received 37.0
 Received 36.4

We can observe the available connections and the created queue in the management user interface as shown in Figure 3 and Figure 4, respectively.
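The same information can also be verified from the command line inside the VM, for example by listing the queues along with their message counts:

$ rabbitmqctl list_queues name messages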

RabbitMQ Connections RabbitMQ Queues

Uninstall

It is good to have an uninstall script to remove the RabbitMQ server for administrative purposes. The Ansible playbook for the same is available in the playbooks/admin folder and is shown below:

---
- name: Uninstall RabbitMQ server
  hosts: rabbitmq
  gather_facts: true
  tags: [remove]

  tasks:
    - name: Stop the RabbitMQ server
      service:
        name: rabbitmq-server
        state: stopped

    - name: Uninstall rabbitmq
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - rabbitmq-server

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-rabbitmq.yml

You are encouraged to read the detailed documentation at https://www.rabbitmq.com/documentation.html to know more about the usage, configuration, client libraries and plugins available for RabbitMQ.

December 01, 2017 01:30 PM

November 30, 2017

Kushal Das

Setting up SecureDrop 0.5rc2 in VMs for QA

Next week we have the 0.5 release of SecureDrop. SecureDrop is an open-source whistleblower submission system that media organizations can use to securely accept documents from and communicate with anonymous sources. It was originally created by the late Aaron Swartz and is currently managed by Freedom of the Press Foundation.

In this blog post I am going to tell you how you can set up a production instance of SecureDrop in VM(s) on your computer, and help us test the system for the new release.

Required software

We provision our VM(s) using Vagrant. You will also need access to a GPG key (along with the private key) to test the whole workflow. The set up is done using Ansible playbooks.

Another important piece is a Tails VM for the administrator/journalist workstation. Download the latest (Tails 3.3) ISO from their website.

You will need at least 8GB RAM in your system so that you can have the 3 VM(s) required to test the full system.

Get the source code

For our test, we will first set up a SecureDrop 0.4.4 production system, and then we will update that to the 0.5rc release.

Clone the SecureDrop repository into a directory on your local computer, and then use the following command to set up the two VM(s): one VM is for the application server, and the other is the monitor server.

$ vagrant up /prod/ --no-provision

In case you don’t have the right image file for KVM, you can convert the Virtualbox image following this blog post.

Create a Tails VM

Follow this guide to create a virtualized Tails environment.

After the boot, remember to create a Persistent Storage, and also set up an administrator password (you will have to provide the administrator password every time you boot the Tails VM).

For KVM, remember to mark the drive as a removable USB storage and also mark it in the Booting Options section after the installation.

Then you can mount the SecureDrop git repository inside the Tails VM; I used this guide for that.

Also remember to change the Virtual Network Interface in the virt-manager to Virtual network ‘securedrop0’: NAT for the Tails VM.

Install the SecureDrop 0.4.4 release in the production VM(s).

For the next part of the tutorial, I am assuming that the source code is at the ~/Persistent/securedrop directory.

Move to 0.4.4 tag

$ git checkout 0.4.4

We will also have to remove a validation role from the 0.4.4 Ansible playbook, otherwise it will fail on a Tails 3.3 system.

diff --git a/install_files/ansible-base/securedrop-prod.yml b/install_files/ansible-base/securedrop-prod.yml
index 877782ff..37b27c14 100755
--- a/install_files/ansible-base/securedrop-prod.yml
+++ b/install_files/ansible-base/securedrop-prod.yml
@@ -11,8 +11,6 @@
# Don't clobber new vars file with old, just create it.
args:
creates: "{{ playbook_dir }}/group_vars/all/site-specific"
- roles:
- - { role: validate, tags: validate }
- name: Add FPF apt repository and install base packages.
hosts: securedrop

Create the configuration

On the host system, make sure that you export your GPG public key to a file in the SecureDrop source directory; in my example I stored it in install_files/ansible-base/kushal.pub. I also have the exported insecure key from Vagrant. You can find that key at ~/.vagrant.d/insecure_private_key on your host system. Make sure to copy that file into the SecureDrop source directory too, so that we can later access it from the Tails VM.
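If you have not exported the public key before, a command along the following lines will do it from the SecureDrop source directory (replace the placeholder with the ID of your own key; depending on your system the binary may be gpg or gpg2):

$ gpg --export --armor <your-key-id> > install_files/ansible-base/kushal.pub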

Inside the Tails VM, give the following command to set up the dependencies.

$ ./securedrop-admin setup

Next, we will use the sdconfig command to create the configuration file.

$ ./securedrop-admin sdconfig

The above command will ask you for many details; you can use the defaults in most cases. I am pasting my configuration file below so that you can look at the example values I am using. The IP addresses are the default addresses for the production Vagrant VM(s); you should keep them the same as mine.

---
### Used by the common role ###
ssh_users: vagrant
dns_server: 8.8.8.8
daily_reboot_time: 4 # An integer between 0 and 23

# TODO Should use ansible to gather this info
monitor_ip: 10.0.1.5
monitor_hostname: mon
app_hostname: app
app_ip: 10.0.1.4

### Used by the app role ###
# The securedrop_header_image has to be in the install_files/ansible-base/ or
# the install_files/ansible-base/roles/app/files/ directory
# Leave set to empty to use the SecureDrop logo.
securedrop_header_image: ""
# The app GPG public key has to be in the install_files/ansible-base/ or
# install_files/ansible-base/roles/app/files/ directory
#
# The format of the app GPG public key can be binary or ASCII-armored,
# the extension also doesn't matter
#
# The format of the app gpg fingerprint needs to be all capital letters
# and zero spaces, e.g. "B89A29DB2128160B8E4B1B4CBADDE0C7FC9F6818"
securedrop_app_gpg_public_key: kushal.pub
securedrop_app_gpg_fingerprint: A85FF376759C994A8A1168D8D8219C8C43F6C5E1

### Used by the mon role ###
# The OSSEC alert GPG public key has to be in the install_files/ansible-base/ or
# install_files/ansible-base/roles/app/files/ directory
#
# The format of the OSSEC alert GPG public key can be binary or
# ASCII-armored, the extension also doesn't matter
#
# The format of the OSSEC alert GPG fingerprint needs to be all capital letters
# and zero spaces, e.g. "B89A29DB2128160B8E4B1B4CBADDE0C7FC9F6818"
ossec_alert_gpg_public_key: kushal.pub
ossec_gpg_fpr: A85FF376759C994A8A1168D8D8219C8C43F6C5E1
ossec_alert_email: kushaldas@gmail.com
smtp_relay: smtp.gmail.com
smtp_relay_port: 587
sasl_username: fakeuser
sasl_domain: gmail.com
sasl_password: fakepassword

### Use for backup restores ###
# If the `restore_file` variable is defined, Ansible will overwrite the state of
# the app server with the state from the restore file, which should have been
# created by a previous invocation of the "backup" role.
# To use uncomment the following line and enter the filename between the quotes.
# e.g. restore_file: "sd-backup-2015-01-15--21-03-32.tar.gz"
#restore_file: ""
securedrop_app_https_on_source_interface: False
securedrop_supported_locales: []

Starting the actual installation

Use the following two commands to start the installation.

$ ssh-add insecure_private_key
$ ./securedrop-admin install

Then wait for a while for the installation to finish.

Configure the Tails VM as an admin workstation

$ ./securedrop-admin tailsconfig

The above command expects that the previous installation step finished without any issue. The addresses for the source and journalist interfaces can be found in the install_files/ansible-base/*ths files at this moment.

After this command, you should see two desktop shortcuts on your Tails desktop, one pointing to the source interface and one to the journalist interface. Double-click on the source interface shortcut and make sure that you can view the source interface and that the SecureDrop version mentioned on the page is 0.4.4.

Now update the systems to the latest SecureDrop rc release

The following commands in the Tails VM will help you to update to the latest RC release.

$ source .venv/bin/activate
$ cd install_files/ansible-base
$ torify wget https://gist.githubusercontent.com/conorsch/e7556624df59b2a0f8b81f7c0c4f9b7d/raw/86535a6a254e4bd72022865612d753042711e260/securedrop-qa.yml
$ ansible-playbook -vv --diff securedrop-qa.yml

Then we will SSH into both the app and mon VM(s), and give the following command to update to the latest RC.

$ sudo cron-apt -i -s

Note: You can use ssh app and ssh mon to connect to the systems. You can also check out the release/0.5 branch and rerun the tailsconfig command. That will make sure the desktop shortcuts are trusted by default.

After you update both systems, if you reopen the source interface in the Tails VM again, you should see the version mentioned as an RC release.

Now, if you open up the source interface onion address in the Tor browser on your computer, you should be able to submit documents/messages.

SecureDrop hackathon at EFF office next week

On December 7th from 6PM we are having a SecureDrop hackathon at the EFF office. Please RSVP and come over to start contributing to SecureDrop.

by Kushal Das at November 30, 2017 10:36 PM

Saptak Sengupta

Science Hack Day India, 2017

So, finally, I managed to clear up some time to write about the best event of the year I have attended: Science Hack Day India, 2017. This was my second time at Science Hack Day India. SHD 2016 was so phenomenal that there was no way I was missing it this time either, phenomenal more because of the wonderful people I got to meet and really connect with, since the entire atmosphere of the event is like an informal, friendly unconference. This year it was no different.


                Picture Credit: Sayan Chowdhury

Science Hack Day 2017 was truly bigger, better and even more fun than last year. Happening at one of the most happening venues, Sankalp Bhumi Farm, just the stay is so lovely that one doesn't need any other reason to attend. Unlike last time, this year I had two friends accompanying me to Science Hack Day. We reached early in the morning on day zero. As at all conferences, it was really good to meet everyone, people I was personally meeting after maybe six months, or a year, or for the very first time. There were general discussions about who is working on what and the new terminal emulator they are using, or the nginx trick they might be using, or the great new open source software they came across. But this is something everyone knows happens when techies meet. What most people don't know about are things like the cycling and kayaking that we do. So most of the afternoon was spent by everyone cycling, kayaking and having fun rather than in any serious discussion at all. In the evening there was an informal round of introductions so that everyone could get a little accustomed to each other. After dinner, everyone bid goodnight and went to sleep.

But have you ever heard of geeks sleeping just after dinner? Obviously not. So it was only a matter of time before everyone regrouped at the hackerspace which was set up for the next day. Then Farhaan and I had the privilege of listening to stories from a dreamy Sayan Chowdhury, which marked the end of the day for us.

Next morning, after breakfast, it was time for the mentor introductions, which were followed by a great basic explanation of how an aeroplane flight works. It reminded me of my science classes, and I started wishing that we had had similar explanations with a proper unmanned aircraft back then. And it wasn't just theory; we got to see that aircraft actually fly. This captured the real notion of a hack day: we don't just talk, we make and also break stuff. After this it was time to start with our hacks. Contrary to my earlier plans, my friends and I started working on assembling a 3D printer which was mainly brought for Hackerspace Belgaum. I had always wondered what the big deal in assembling is, but I realised I was so wrong.

The entire assembly took all day, since we were doing it for the first time and were figuring out stuff as we went. I mostly attached parts while my smarter friends figured everything out and told me what to attach where. By dinner it was ready and assembled, and I was like "Yay! Let's start printing". That is when Siddhesh told me that the trickiest part was yet to be done: calibration. So we got started with it. When the calibration was all set and done, it was time to print. We decided to print the "Hello World" of 3D printing, i.e. a cube. So the cube started printing; the first layer got printed, the second layer got printed, and by the third layer everything came off. We realised the bed wasn't heating.

A little disappointed, we settled down for the day and went off to bed. The next day we decided to use glue to make the bed somewhat sticky. This time it printed, not so perfectly but mostly all good. I have never been more excited to see a tiny little white cube, and neither have I seen so many other people behave the same. After that it was time for rocket flying, followed by a group photo. The event closed with a project presentation by every team.

Hoping to come back again next year.

by SaptakS (noreply@blogger.com) at November 30, 2017 03:21 PM

November 20, 2017

Shakthi Kannan

Ansible deployment of Cacti for Monitoring

[Published in Open Source For You (OSFY) magazine, May 2017 edition.]

Introduction

In this third article in the DevOps series, we will install and set up Cacti, a free and open source Web-based network monitoring and graphing tool, using Ansible. Cacti is written in PHP and uses the MySQL database as a backend. It uses the RRDtool (Round-Robin Database tool) to handle time series data and has built-in SNMP support. Cacti has been released under the GNU General Public License.

Setting up Cacti

We will use a CentOS 6.8 virtual machine (VM) running on KVM to set up Cacti. Just for this demonstration, we will disable SELinux. You will need to set the following in /etc/selinux/config and reboot the VM.

SELINUX=disabled

When using Cacti in production, it is essential that you enable SELinux. You should then test for Internet connectivity from within the VM.

The Ansible version used on the host Parabola GNU/Linux-libre x86_64 is 2.2.1.0. The ansible/inventory/kvm/ directory structure is shown below:

ansible/inventory/kvm/inventory
ansible/inventory/kvm/group_vars/all/all.yml

The IP address of the guest CentOS 6.8 VM is provided in the inventory file as shown below:

centos ansible_host=192.168.122.98 ansible_connection=ssh ansible_user=root ansible_password=password

Add an entry for ‘centos’ in the /etc/hosts file as indicated below:

192.168.122.98 centos

The contents of the all.yml for use with the playbook are as follows:

---
mysql_cacti_password_hash: "{{ vault_mysql_cacti_password_hash }}"

mysql_username: "{{ vault_mysql_user }}"
mysql_password: "{{ vault_mysql_password }}"

The cacti.yml playbook is located in the ansible/playbooks/configuration folder.

Vault

Ansible provides the Vault feature, which allows you to store sensitive information like passwords in encrypted files. You can set the EDITOR environment variable to the text editor of your choice, as shown below:

$ export EDITOR=nano

In order to store our MySQL database credentials, we will create a vault.yml file as indicated below:

$ ansible-vault create inventory/kvm/group_vars/all/vault.yml

Provide a password when prompted, following which, the Nano text editor will open. You can enter the following credentials and save the file.

---
vault_mysql_cacti_password_hash: "*528573A4E6FE4F3E8B455F2F060EB6F63ECBECAA"

vault_mysql_user: "cacti"
vault_mysql_password: "cacti123"

You can edit the same file, if you wish, using the following command:

$ ansible-vault edit inventory/kvm/group_vars/all/vault.yml

It will prompt you for a password, and on successful authentication, your text editor will open with the decrypted file contents for editing.
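As an aside, the mysql_cacti_password_hash value stored in the vault is a MySQL-style password hash rather than a plain-text password. One way to generate such a hash for a password of your own choice (shown here purely as an illustration, on any MySQL 5.x server) is MySQL's PASSWORD() function:

$ mysql -u root -p -e "SELECT PASSWORD('your-password');"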

Apache

Cacti has many dependency packages, and the first software that we will install is the Apache HTTP server.

---
- name: Install web server
  hosts: centos
  gather_facts: true
  tags: [httpd]

  tasks:
    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install HTTP packages
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - nano
        - httpd
        - httpd-devel

    - name: Start the httpd server
      service:
        name: httpd
        state: started

    - wait_for:
        port: 80
A ‘yum update’ is first performed to sync with the package repositories. The httpd Web server and a few other packages are then installed. The server is started, and the Ansible playbook waits for the server to listen on port 80.

MySQL and PHP

The MySQL, PHP and RRDTool packages are then installed, following which the SNMP and MySQL servers are started as shown below:

- name: Install MySQL, PHP packages
  hosts: centos
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [database-web]

  tasks:
    - name: Install database/web packages
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - mysql
        - mysql-server
        - MySQL-python
        - php-mysql
        - php-pear
        - php-common
        - php-gd
        - php-devel
        - php
        - php-mbstring
        - php-cli
        - php-process
        - php-snmp
        - net-snmp-utils
        - net-snmp-libs
        - rrdtool

    - name: Start snmpd server
      service:
        name: snmpd
        state: started

    - name: Start mysqld server
      service:
        name: mysqld
        state: started

    - wait_for:
        port: 3306

Cacti

Cacti is available in the EPEL repository for CentOS. The GPG key for the EPEL repository is imported before adding the repository. A ‘yum update’ is performed and the Cacti package is installed. A ‘cacti’ user is then created in the MySQL database.

- name: Install Cacti
  hosts: centos
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [cacti]

  tasks:
    - name: Import EPEL GPG key
      rpm_key:
        key: http://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6
        state: present

    - name: Add YUM repo
      yum_repository:
        name: epel
        description: EPEL YUM repo
        baseurl: https://dl.fedoraproject.org/pub/epel/$releasever/$basearch/
        gpgcheck: yes

    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install cacti
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - cacti

    - name: Create cacti database user
      mysql_user:
        name: cacti
        password: "{{ mysql_cacti_password_hash }}"
        encrypted: yes
        priv: '*.*:ALL,GRANT'
        state: present

Fixing a bug

The time zone data is missing in this MySQL version (5.1.73-8). In order to resolve this bug, the mysql_test_data_timezone.sql file needs to be imported and the ‘cacti’ user needs to be given the SELECT privilege to do this.

- name: For bug https://github.com/Cacti/cacti/issues/242
  hosts: centos
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [bug]

  tasks:
    - name: Import mysql_test_data_timezone.sql
      mysql_db:
        state: import
        name: mysql
        target: /usr/share/mysql/mysql_test_data_timezone.sql

    - name: Grant privileges
      mysql_user:
        name: cacti
        append_privs: true
        priv: 'mysql.time_zone_name:SELECT'
        state: present

It is a good practice to have a separate playbook for such exceptional cases. In the future, when you upgrade to newer versions that have the bug fixed, you can simply skip this step, as shown below.
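Since this play is tagged ‘bug’, Ansible's --skip-tags option is one way to leave it out on fixed versions, for example:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/cacti.yml --ask-vault-pass --skip-tags "bug"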

Configuration

The last step involves configuring Cacti.

- name: Configuration
  hosts: centos
  become: yes
  become_method: sudo
  gather_facts: true
  tags: [config]

  tasks:
    - name: Create a database for cacti
      mysql_db:
        name: cacti
        state: present

    - name: Import cacti.sql
      mysql_db:
        state: import
        name: cacti
        target: /usr/share/doc/cacti-1.0.4/cacti.sql

    - name: Update database credentials in config file
      lineinfile:
        dest: /etc/cacti/db.php
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
      with_items:
        - { regexp: '^\$database_username', line: "$database_username = '{{ mysql_username }}';" }
        - { regexp: '^\$database_password', line: "$database_password = '{{ mysql_password }}';" }

    - name: Allow port 80
      shell: iptables -I INPUT 5 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Update access in cacti.conf for httpd
      replace:
        dest: /etc/httpd/conf.d/cacti.conf
        regexp: "{{ item.regexp }}"
        replace: "{{ item.replace }}"
      with_items:
        - { regexp: 'Require host localhost', replace: 'Require all granted' }
        - { regexp: 'Allow from localhost', replace: 'Allow from all' }

    - lineinfile:
        dest: /etc/cron.d/cacti
        regexp: '^#(.*)$'
        line: '\1'
        backrefs: yes

    - name: Start mysqld server
      service:
        name: mysqld
        state: restarted

    - wait_for:
        port: 3306

    - name: Start the httpd server
      service:
        name: httpd
        state: restarted

    - wait_for:
        port: 80

A database called ‘cacti’ is created for the application, and the cacti.sql file is imported into it. The database credentials are updated for the Cacti application. The firewall rules are then updated to allow incoming HTTP requests for port 80. The periodic cron poller is then enabled in /etc/cron.d/cacti:

*/5 * * * *     cacti   /usr/bin/php /usr/share/cacti/poller.php > /dev/null 2>&1

The MySQL and HTTP servers are then restarted.

The result

The entire playbook can now be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/cacti.yml --ask-vault-pass

It will prompt you for the Vault password, following which all the plays will be completed. You can then open http://192.168.122.98/cacti to accept the GNU General Public License agreement. After you agree to the terms of the license, click ‘Next’. The Cacti installation wizard shows the pre-installation checks, which should not have any errors. This is followed by the selection of the installation type, binary location, version, and the directory permission checks. You can then decide on the templates you would like to set up, following which a user login is provided. The default user name and password are ‘admin:admin’, and you will be prompted to change the password immediately after logging in. You can then proceed to log in to the Cacti dashboard. Figures 1 to 8 give the screenshots of the Cacti Web UI installation for reference.

License Agreement
Pre-installation checks
Installation type
Binary location and version
Directory permission checks
Template setup
User login
Change password

A screenshot of Cacti graphing for memory usage is shown in Figure 9.

Cacti Web UI

November 20, 2017 07:30 PM

November 16, 2017

Kushal Das

PyConf Hyderabad 2017

In the beginning of October, I attended a new PyCon in India, PyConf Hyderabad (no worries, they are working on the name for the next year). I was super excited about this conference, the main reason being the chance to meet more Python developers from India. We are a large country, and we certainly need more local conferences :)

We reached the conference hotel a day before the event started, along with Py. The first day of the conference was the workshop day; we reached the venue on time to say hi to everyone, meet the team behind the conference and many old friends. It was good to see that folks had traveled from all across the country to volunteer for the conference. Of course, we had a certain number of dgplug folks there :)

On the conference day, Anwesha and I set up the PSF booth and talked to the attendees. During the lightning talk session, Sayan and Anwesha introduced PyCon Pune, and they also opened up the registration during the lightning talks :). I attended Chandan Kumar’s talk about his journey into upstream projects. I have to admit that I feel proud to see all the work he has done.

Btw, I forgot to mention that lunch at PyConf Hyderabad was the best conference food ever. They had some amazing biryani :).

The last talk of the day was my keynote titled Free Software movement & current days. Anwesha and I wrote an article on the history of Free Software a few months back, and the talk was based on that. This was also the first time I spoke about the Freedom of the Press Foundation (I attended my first conference as an FPF staff member).

The team behind the conference did some amazing groundwork to make this conference happen. It was a good opportunity to meet the community, and make new friends.

by Kushal Das at November 16, 2017 04:25 AM

November 14, 2017

Jaysinh Shukla

My experience of mentoring at Django Girls Bangalore 2017

group_photo

TL;DR

Last Sunday, Django Girls Bangalore organized a hands-on session of web programming. This is a small event report from my side.

Detailed overview

Django Girls is a not-for-profit initiative led by Ola Sitarska and Ola Sendecka. The movement helps women learn the skills of website development using the well-known web framework Django. The community is backed by organizers from many countries, and organizations like The Python Software Foundation, Github, DjangoProject and many more are funding Django Girls.

The Django Girls Bangalore chapter was organized by Sourav Singh and Kumar Anirudha. This was my second time mentoring at a Django Girls event; the first was for the Ahmedabad chapter. The venue was sponsored by HackerEarth. Eight male and two female mentors guided 21 women during this event, with each mentor assigned roughly three participants. Introducing participants to web development becomes easy with the help of the Django Girls handbook, a collection of beginner-friendly, hands-on tutorials written in simple language that covers everything from the basics of the Python programming language to deploying your web application. The participants assigned to me had come prepared, with Python already configured on their workstations. We started by introducing ourselves. We took some time browsing the website of undersea cables; one of the amusing questions I got was, “Isn’t the world connected with satellites?”. My team was comfortable with Python, so we quickly skimmed to the part where I introduced them to the basics of the web and then Django. I noticed mentors were progressing according to the convenience of the participants, and a nice amount of time was invested in discussing the queries that were raised. In the middle of the explanations, we heard a loud call for lunch. A decent meal was served to all the members, and I networked with other mentors and participants during the break. Post-lunch we created a blog app and configured it with our existing project. Giving an overview of Django models, topped with the concept of the ORM, turned out to be the most arduous thing to explain. With time as a constraint, I focused on the admin panel and taught the participants how to deploy their websites to PythonAnywhere. I am happy with the hard work done by my team; they were able to demonstrate what they did to the world, and I was even more joyful than them at that achievement.

The closing ceremony turned into an amusing event for us. Ten copies of the book Two Scoops of Django were distributed to participants chosen by a random draw. I solemnly thank the authors of the book and Pothi.com for gifting such a nice reference. Participants shared their experiences of the day, and mentors pointed out helpful resources to look at afterwards. They insisted that the girls should not stop at this point, but spread their wings by developing websites using the skills they learned. T-shirts, stickers and badges were distributed as event swag.

You can find the list of all Django Girls chapters here. Djangonauts are encouraged to become mentors for Django Girls events in their towns. If you can’t find any in your town, I encourage you to take the responsibility and organize one. If you are already a part of the Django Girls community, why not share your experience with others?

Proofreaders: Kushal Das, Dhavan Vaidya, Isaul Vargas

by Jaysinh Shukla at November 14, 2017 02:31 AM

October 22, 2017

Samikshan Bairagya

Notes on Tensorflow and how it was used in ADTLib

It’s been almost 2 years since I became an amateur drummer (which apparently is also the time since my last blog), and I have always felt that it would be great to have something that can provide me with drum transcriptions from a given music source. I researched a bit and came across a library that provides an executable as well as an API that can be used to generate drum tabs (consisting of hi-hats, snare and the kick drum) from a music source. It’s called ADTLib. It isn’t extremely accurate when one tests it, and I’m sure the library will only get better with more data sources available to train the neural networks, but this was definitely a good place to learn a bit about neural networks and libraries like Tensorflow. This blog post is basically meant to serve as my personal notes on how Tensorflow has been used in ADTLib.

So to start off, the ADTLib source code actually doesn’t train any neural networks. What ADTLib essentially does is feed a music file through a pre-trained neural network to give us the automatic drum transcriptions in text as well as PDF form. We will begin by looking at two methods and one function inside https://github.com/CarlSouthall/ADTLib/blob/master/ADTLib/utils/__init__.py

  • methods create() and implement() belonging to class SA
  • function system_restore()

In system_restore() we initialise an instance of SA and call the create() method. There are a lot of parameters that are initialised when we create the neural network graph. We’ll not go into the details of those. Instead let’s look at how Tensorflow is used inside the SA.create() method. I would recommend reading this article on getting started with Tensorflow before going ahead with the next part of this blog post.

If you’ve actually gone through that article, you’d know by now that Tensorflow creates graphs that implement a series of Tensorflow operations. Data flows through the graph as multi-dimensional arrays called ‘tensors’, and hence the name ‘tensorflow’. Great. So, getting back to the create() method, we find that first tf.reset_default_graph() is called. This clears the default graph stack and resets the global default graph.

Next we call a method weight_bias_init(). As the name suggests this method initialises the weights and biases for our model. In a neural network, weights and biases are parameters which can be trained so that the neural network outputs values that are closest to the target output. We can use ‘variables’ to initialise these trainable parameters in Tensorflow. Take these examples from the weight_bias_init() code:

  • self.biases = tf.Variable(tf.zeros([self.n_classes]))
  • self.weights =tf.Variable(tf.random_normal([self.n_hidden[(len(self.n_hidden)-1)]*2, self.n_classes]))

self.biases is set to a variable whose initial value is defined by the tensor returned by tf.zeros() (a 1-D tensor of length 2 with all elements set to 0, since self.n_classes is set to 2 in the ADTLib code). self.weights is initialised to a variable defined by the tensor returned by tf.random_normal(), which returns a tensor of the mentioned shape with random normal values (type float32), a mean of 0.0 and a standard deviation of 1.0. These weights and biases are trained based on the type of optimisation function later on. In ADTLib no training is actually done wrt these weights and biases; the parameters are loaded from pre-trained neural networks, as I’ve mentioned before. However, we need these tensors defined in order to be able to implement the neural network on the input music source.

Next we initialise a few ‘placeholders’ and ‘constants’. Placeholders and constants are again ‘tensors’ and resemble units of the graph. Example lines from the code:

  • self.x_ph = tf.placeholder('float32', [1, 1000, 1024]) (the input placeholder, with its dimensions written out explicitly here)
  • self.seq=tf.constant(self.truncated,shape=[1])

Placeholders are used when a graph needs to be provided with external inputs; they are given actual values later, at execution time. In the first example above we define a placeholder that is supposed to hold ‘float32’ values in an array of dimension [1, 1000, 1024]. (Don’t worry about how I arrived at these dimensions; basically, if you check the init() method of class SA, you’ll see that ‘self.batch’ is a structure of dimension [1000, 1024].) Constants, as the name suggests, hold constant values. In the second example, self.truncated is initialised to 1000, and ‘shape’ is an optional parameter that specifies the dimension of the resulting tensor; here the dimension is set to [1].
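To make the difference between variables, placeholders and constants concrete, here is a minimal, self-contained sketch in the same TensorFlow 1.x style as ADTLib. The shapes and names are invented for illustration and are not taken from the ADTLib code:

import numpy as np
import tensorflow as tf

# trainable parameters, initialised the same way as ADTLib's weights and biases
weights = tf.Variable(tf.random_normal([4, 2]))  # random normal initial values
biases = tf.Variable(tf.zeros([2]))              # all elements set to 0

# a placeholder: an external input that is fed in only at execution time
x_ph = tf.placeholder(tf.float32, shape=[3, 4])

# a constant: a fixed value baked into the graph (mirrors self.seq in ADTLib)
seq = tf.constant(3, shape=[1])

# building the graph only describes the computation; nothing runs yet
pred = tf.nn.softmax(tf.matmul(x_ph, weights) + biases)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # give the variables their initial values
    out = sess.run(pred, feed_dict={x_ph: np.random.rand(3, 4).astype(np.float32)})
    print(out.shape)  # (3, 2)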

Now, ADTLib uses a special type of recurrent neural network called a bidirectional recurrent neural network (BRNN). Here the neurons or cells of a regular RNN are split into two directions, one for the positive time direction (forward states), and another for the negative time direction (backward states). Inside the create() method, we come across the following code:
self.outputs, self.states= tf.nn.bidirectional_dynamic_rnn(self.fw_cell,
self.bw_cell, self.x_ph,sequence_length=self.seq,dtype=tf.float32)

This creates the BRNN with the two types of cells provided as parameters, the input training data, the length of the sequence (which is 1000 in this case) and the data type. self.outputs is a tuple (output_fw, output_bw) containing the forward and the backward RNN output Tensor.

The forward and backward outputs are concatenated and fed to the second layer of the BRNN as follows:

self.first_out=tf.concat((self.outputs[0],self.outputs[1]),2)
self.outputs2, self.states2= tf.nn.bidirectional_dynamic_rnn(self.fw_cell2,
self.bw_cell2,self.first_out,sequence_length=self.seq2,dtype=tf.float32)

We now have the graph that defines how the BRNN should behave. The next few lines of code in the create() method deal with something called soft-attention. This answer on Stack Overflow provides an easy introduction to the concept; check it out if you want to, but I’ll not go much into those details. What happens essentially is that the forward and backward output cells from the second layer are again concatenated and then further processed to ultimately get a self.presoft value which resembles (W*x + b), as seen below.

self.zero_pad_second_out=tf.pad(tf.squeeze(self.second_out),[[self.attention_number,self.attention_number],[0,0]])
self.attention_m=[tf.tanh(tf.matmul(tf.concat((self.zero_pad_second_out[j:j+self.batch_size],tf.squeeze(self.first_out)),1),self.attention_weights[j])) for j in range((self.attention_number*2)+1)]
self.attention_s=tf.nn.softmax(tf.stack([tf.matmul(self.attention_m[i],self.sm_attention_weights[i]) for i in range(self.attention_number*2+1)]),0)
self.attention_z=tf.reduce_sum([self.attention_s[i]*self.zero_pad_second_out[i:self.batch_size+i] for i in range(self.attention_number*2+1)],0)
self.presoft=tf.matmul(self.attention_z,self.weights)+self.biases

Next we come across self.pred=tf.nn.softmax(self.presoft). This basically decides what activation function to use for the output layer; in this case the softmax activation function is used. IMO this is a good reference for different kinds of activation functions.

We now move on to the SA.implement() method. This method takes input audio data, processed by madmom to create a spectrogram. Next, self.saver.restore(sess, self.save_location+'/'+self.filename) loads the respective parameters from pre-trained neural network files for the respective sounds (hi-hat/snare/kick). These Tensorflow save files can be found under ADTLib/files. Once the parameters are loaded, the Tensorflow graph is executed using sess.run() as follows:
self.test_out.append(sess.run(self.pred, feed_dict={self.x_ph: np.expand_dims(self.batch,0),self.dropout_ph:1}))

When this function is executed we get the test results and further processing is done (this process is called peak-picking) to get the onsets data for the different percussive components.

I guess that’s it. There are a lot of details that I have omitted from this blog, mostly because it would make the blog way longer. I’d like to thank the author of ADTLib (Carl Southall), who cleared some icky doubts I had wrt the ADTLib code. There is also a web version of ADTLib that has been developed with an aim to gather more data to train the networks better. So contribute data if you can!


by Samikshan Bairagya at October 22, 2017 03:16 AM

October 18, 2017

Subho

Understanding RapidJson – Part 2

In my previous blog on Rapidjson, a lot of people asked for a detailed example in the comments, so here is part 2 of Understanding Rapidjson with a slightly more detailed example. I hope this will help you all.

We will straightaway improve on my last example in the previous blog and modify the changeDom function to add a more complex object to the DOM tree.

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node.SetString("c++");

    Document subdoc(&d.GetAllocator()); // sub-document sharing the main document's allocator
    subdoc.SetObject();                 // starting the object

    Value arr(kArrayType); // the innermost array
    // use the document's allocator so the memory it hands out outlives this function
    Value::AllocatorType& allocator = subdoc.GetAllocator();
    for (unsigned i = 0; i < 10; i++)
        arr.PushBack(i, allocator); // adding values to the array; this function expects an allocator object

    // adding the array to its parent object and so on, finally adding it to the parent doc object
    subdoc.AddMember("New", Value(kObjectType).Move().AddMember("Numbers", arr, allocator), subdoc.GetAllocator());
    d.AddMember("testing", subdoc, d.GetAllocator()); // finally adding the sub-document to the main doc object

    d["f"] = true;
    d["t"].SetBool(false);
}

Here we are creating Value objects of type kArrayType and kObjectType and appending them to their parent node from innermost to outermost.

Before Manipulation
{
 "hello": "world",
 "t": true,
 "f": false,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
 0,
 1,
 2,
 3
 ]
}
After Manipulation
{
 "hello": "c++",
 "t": false,
 "f": true,
 "n": null,
 "i": 123,
 "pi": 3.1416,
 "a": [
    0,
    1,
    2,
    3
  ],
 "testing": {
     "New": {
         "Numbers": [
             0,
             1,
             2,
             3,
             4,
             5,
             6,
             7,
             8,
             9
         ]
     }
 }
}

The above changeDom can also be written using a PrettyWriter object as follows:

template <typename Document>
void changeDom(Document& d) {
    Value& node = d["hello"];
    node.SetString("c++");

    Document subdoc(&d.GetAllocator()); // sub-document

    // old school: write the json element by element
    StringBuffer s;
    PrettyWriter<StringBuffer> writer(s);
    writer.StartObject();
    writer.String("New");
    writer.StartObject();
    writer.String("Numbers");
    writer.StartArray();
    for (unsigned i = 0; i < 10; i++)
        writer.Uint(i);
    writer.EndArray();
    writer.EndObject();
    writer.EndObject();
    subdoc.Parse(s.GetString()); // Parsing the string written to the buffer to form a sub DOM

    d.AddMember("testing", subdoc, d.GetAllocator()); // Attaching the sub DOM to the main DOM object

    d["f"] = true;
    d["t"].SetBool(false);
}
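To verify the result (and to produce output like the “After Manipulation” listing above), the modified document can be serialized back to a JSON string. A minimal sketch, assuming the usual RapidJSON headers are available:

#include "rapidjson/document.h"
#include "rapidjson/prettywriter.h"
#include "rapidjson/stringbuffer.h"
#include <iostream>

using namespace rapidjson;

void printDom(const Document& d) {
    StringBuffer buffer;
    PrettyWriter<StringBuffer> writer(buffer); // use Writer<> instead for compact output
    d.Accept(writer);                          // walk the DOM and write JSON into the buffer
    std::cout << buffer.GetString() << std::endl;
}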

Happy Coding! Cheers.

More reads:
https://stackoverflow.com/questions/32896695/rapidjson-add-external-sub-document-to-document


by subho at October 18, 2017 02:38 PM

September 28, 2017

Dhriti Shikhar

September Golang Bangalore Meetup

The September Golang Bangalore Meetup was conducted on Saturday, September 16, 2017 at DoSelect, Bengaluru. Around 25-30 people attended the meetup.

The meetup started at 10:15 with the first talk by Baiju Muthukadan who works at Red Hat India Pvt. Ltd., Bengaluru. He talked about “Testing techniques in Golang”.

IMG_20170916_102219

Karthikeyan Annamalai  gave a lightning talk about “Building microservice with gRPC”. The slides related to his talk can be found here.

karthik

Dinesh Kumar gave an awesome talk about “Gotcha’s in Golang”. The slides related to his talk can be found here and the code explained during the demo is here.

IMG_20170916_115114.jpg

The last lightning talk of the meetup was by Akshat who works at Go-Jek. Akshat talked about “Building an asynchronous http client with retries and hystrix in golang“.

IMG_20170916_125031.jpg

I thank Sanket Saurav and Mohommad Rafy for helping us organize the September Golang Bangalore Meetup by providing the venue and food at DoSelect. I also thank Sudipta Sen for helping us out with the meetup preparation.


by Dhriti Shikhar at September 28, 2017 08:47 AM

Serverless Architecture

I attended the Serverless Architecture Meetup organized by Hasgeek on Saturday, September 23, which got me curious to learn more about serverless architecture. The meetup was conducted at Walmart Labs, Bengaluru.

The first talk was by Akhilesh Singh, who is a Senior Technical Consultant at Google. Akhilesh talked about:

  • What is Serverless Architecture?
  • Evolution of serverless
  • Serverless vs IaaS model

Akhilesh was very proficient in not only explaining what serverless architecture is, but also in putting across his point of view about this trend.

The second talk was by Ganesh Samarthyam, Co-founder of CodeOps Technologies and Srushith Repakula, Software Engineer at CodeOps Technologies. Ganesh talked about how serverless architecture is applied in practice. Srushith showed a demo application for auto-retweeting written in Python which uses Apache OpenWhisk.

The most interesting part of the meetup was the Panel Discussion. The panel members were:

  • Akhilesh Singh
  • Ganesh Samarthyam
  • Joydeep Sen Sarma (Co-founder & CTO, Qubole)
  • Rishu Mehrotra (SRE Manager, LinkedIn)

During the meetup, a lot of questions were raised around:

  • Security in serverless architecture
  • How resources are utilized
  • Role of devOps in serverless architecture, etc

These are my notes on serverless architecture:

Servers

Conventionally, servers:

  • have fixed resources
  • are supposed to run all the time
  • are managed by system administrators

 

Problem with Servers

  1. When traffic increases, servers are not able to handle the enormous number of requests and can eventually crash.

 

Paas

  1. To handle the above problem, PaaS came into existence, which offered scaling.
  2. This can be considered the first iteration of serverless.
  3. You think about servers, but you don't have to manage them.

 

What does Server-less mean?

The word “server-less” does not mean there are no servers at all. It simply means the elimination of ‘managing’ servers.

 

What is Serverless?

  1. Serverless computing is a cloud computing execution model in which the cloud
    • manages allocation of machine resources
    • bills based on the actual amount of resources consumed by the application (rather than billing on pre-purchased units of capacity)

 

 

What problem does Serverless architecture solve?

  1. We build our applications around VMs. We have a VM for each:
    • database
    • web
    • application
  2. If a VM fails, a layer of our application fails.
  3. Even if we break the application down into smaller containers or microservices, when these microservices or their infrastructure fail, our application fails.

 

Advantages of Serverless architecture

1. Focus on application development rather than managing servers.

2. Serverless provisioning is completely managed by providers using automated systems, which eliminates the need for system administrators.

 

Stateless Nature of Serverless architecture

1. Serverless architectures are event driven.

2. This means that for each event or request to the server, a state is created.
After the request is served, the state is destroyed.

 

Problem with Statelessness

  1. There are different use cases for stateless architecture, so your application architecture needs to be redesigned according to the use case.
  2. State can be stored across multiple requests with:
    • an in-memory DB like Redis
    • simple object storage
  3. This is slower than storing state in:
    • cache
    • RAM

 

Function As A Service (FaaS)

  1. A way to implement Serverless architecture
  2. What is a function?
    • A function is a small program that does one small thing
  3. Short-lived functions are invoked upon each request, and the provider bills the client for running each individual function (a minimal handler sketch follows this list).
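To make this concrete, a FaaS function is typically just a small handler that the platform invokes once per event; the servers, scaling and process management never appear in the code. A minimal sketch in the style of AWS Lambda's Python runtime, with an invented event payload:

import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime metadata
    name = event.get("name", "world")
    # the return value is handed back to the platform; there is no server code anywhere
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }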

 

Popular Faas Services

  1. AWS Lambda
  2. Google Cloud Functions
  3. IBM BlueMix OpenWhisk
  4. hook.io

 

FaaS vs Managed Servers

1. Similarity:
You don't have to manage the servers.

2. Fundamental difference:
In FaaS, you don't need to manage the server applications either.

 

Advantages of Faas

  1. Two FaaS functions written in different languages can interact with each other easily.
  2. Multiple functions can be connected and chained together to implement reusable components.

 

FaaS vs PaaS

Consider an e-commerce website. On a normal day, the traffic is average. But during holidays, we could expect a sudden surge in the traffic. In those cases, the server will not be able to serve so many requests and eventually crash. But this can be solved by scaling the server resources.

In PaaS, scaling is provided, but you need to estimate how many resources you will need and then provision them accordingly. The problem with this is that you might over- or under-estimate. If you overestimate, then even on normal days you pay for unused resources. If you underestimate, then your server will crash when traffic increases.

In FaaS, the biggest USP is ‘automatic scaling’. You don't have to think about scaling: automatic horizontal scaling is managed by the provider and is completely elastic.

 

Backend As A Service (BaaS)

  1. It integrates into a FaaS architecture.
  2. BaaS provides entire application components as a service, such as:
    • DB storage
    • push notifications
    • analytics

 

FaaS Cold Start Problem

  1. Cold-starting a function on a serverless platform takes a considerable amount of time.
  2. This is bad for functions that are accessed infrequently.
  3. This can be overcome by a process called ‘warming’, wherein functions are invoked periodically (see the sketch after this list).
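
A minimal sketch of warming in Python, assuming a hypothetical cron-style trigger invokes the function periodically with a payload containing a "warmup" flag (the flag name is an assumption, not a provider-defined field):

    def handler(event, context):
        if event.get("warmup"):
            # Periodic ping from the scheduler: keeps the container warm and
            # returns immediately without doing any real work
            return {"warmed": True}
        # Normal request path
        return {"result": do_real_work(event)}

    def do_real_work(event):
        # Placeholder for the actual business logic
        return "processed " + str(event.get("payload", ""))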

 

FaaS Time Limit Problem

  1. FaaS functions have a time limit within which they have to run.
  2. If they exceed it, they are automatically killed.
  3. So the application should be redesigned to divide a long-lived task into multiple coordinated functions (see the sketch below).
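
As a rough sketch of such a redesign, the long-lived job below is cut into slices. Each invocation processes one slice and returns a cursor, and a coordinator (a workflow service or the caller itself) re-invokes the function with that cursor until the job is done. The event fields "items" and "cursor" are illustrative assumptions.

    BATCH_SIZE = 100  # chosen so one slice finishes well within the time limit

    def handler(event, context):
        items = event.get("items", [])
        cursor = event.get("cursor", 0)
        batch = items[cursor:cursor + BATCH_SIZE]

        for item in batch:
            process(item)

        next_cursor = cursor + len(batch)
        # The coordinator keeps re-invoking until "done" is True
        return {"cursor": next_cursor, "done": next_cursor >= len(items)}

    def process(item):
        # Placeholder for the per-item work
        pass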

 

Vendor lock-in

  1. This is the major disadvantage of FaaS.
  2. When you move from one provider to another, you will need to change your code accordingly.

 

Serverless Architecture

  1. Serverless goes a step beyond: you don't even have to think about capacity in advance.
  2. You would generally run a monolithic application on a PaaS.
  3. Serverless lets you break your application into small, self-contained programs (functions).
    • Example:
      • Each API endpoint can be a separate function.
  4. From an operations perspective, the reason you would break your app into functions is to scale and deploy them separately.
    • Example:
      • If one of your API endpoints receives 90% of the traffic, then that one bit of code/function can be distributed and scaled much more easily than your entire application (see the sketch after this list).
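
A minimal sketch of the “one function per API endpoint” idea, again assuming an AWS Lambda-like handler signature; the function names, routes and event fields are illustrative assumptions.

    import json

    def list_products(event, context):
        # Backs GET /products; imagine this endpoint taking 90% of the traffic,
        # so it is deployed and scaled on its own
        return {"statusCode": 200, "body": json.dumps(["book", "pen"])}

    def create_order(event, context):
        # Backs POST /orders; a separate function, deployed and scaled separately
        order = json.loads(event.get("body", "{}"))
        return {"statusCode": 201, "body": json.dumps({"received": order})}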

by Dhriti Shikhar at September 28, 2017 08:16 AM

September 20, 2017

Sanjiban Bairagya

Randa 2017 Report – Marble Maps

Just came back home yesterday from Randa Meetings 2017. This year, my major motive for the sprint was to use Qt 5.8’s Qt Speech module instead of custom Java for text-to-speech during navigation, but that could not be achieved because of a bug which made routes not appear in the app in the first place. This bug is reproducible with both the latest code and old-enough code, and is even present in the production app on the Google Play Store itself. So, although most of my time went into deep-diving on the issue, unfortunately I was not able to find its root cause. I will need to pick that up again in the coming weeks when I get time, to get it fixed. Apart from that, I managed to fix a few more bugs and work on adding a splash screen to the Android app:

  • Made the bookmarks dialog responsive to touch everywhere. When we tapped on the Search field in the app, if nothing was written in that field then the bookmarks dialog showed up there. If the number of bookmarks was <= 6, then the far-lower part of the dialog did nothing on being tapped. Made that portion choose the last item from the bookmarks list in that case.
  • There was an issue where pressing the back-button always led to closing the app instead of handling that event depending on the various scenarios. On looking, it was found that the root-cause for the issue was lying in the code of Kirigami itself. Reported the same to Marco Martin, and fixed the issue in the Marble app via a workaround by using Kirigami.AbstractApplicationWindow instead of Kirigami.ApplicationWindow in the Qml. Also reported the issue that svg icons were not showing up in the app on Plasma Mobile because of svg icons not being enabled there.

  • Worked on adding a splash screen to the Marble Maps app. The highest resolution of the Marble logo in png that we had at the moment was 192 x 192, which looked a bit blurry, as if it had been scaled up, so I just displayed it as 100×100 for the splash screen to make it look sharp. Later, Torsten created a 256×256 version of the png from an svg, which got rid of the blur in the previous png. So I added that in the splash screen later, and the icon there looks much bigger, sharper, and non-blurry now.

Apart from work, there was snow this year, did some interesting acro-yoga with Frederik, did a 3-hour hike to walk across the longest pedestrian suspension bridge in the world, went on a tour to Zermatt to catch a glimpse of the Matterhorn, and ended it all with having a delicious barbecue with Emmanuel and Tomaz somewhere up in the mountains on the final evening. Thanks to Mario for organizing the meetings and the cheese and chocolates, without which no work could have been done.



by sanjibanbairagya at September 20, 2017 02:41 AM

August 22, 2017

Anwesha Das

The mistakes I made in my blog posts

Today we will be discussing the mistakes I made with my blog posts.
I started (seriously) writing blogs a year back. A few of my posts got a pretty nice response. The praise put me in seventh heaven, and I thought I was a fairly good blogger. But after almost a year of writing, one day I chanced upon one of my older posts, and reading it sent me crashing down to earth.

There was a huge list of mistakes I had made:

The post was a perfect example of TLDR. I previously used to judge a post based on quantity. The larger the number of words, the better! (Typical lawyer mentality!)

The title and the lead paragraph were vague.

The sentences were long (far too long).

There were plenty of grammatical mistakes.

I lost the flow of thought and broke the logical chain in many places.

The measures I took to solve my problem

I was upset. I stopped writing for a month or so.
After the depressed, dispirited phase was over, I got back up, dusted myself off and tried to find ways to become a better writer.

Talks, books, blogs:

I searched for talks, writings and books on “how to write good blog posts” and started reading and watching videos. I tried to follow their advice while writing my posts.

Earlier I used to take a lot of time (a week) to write each post. I used to flit from one sentence to the next, so that I did not forget the latest idea or next thought that popped into my head.
But that caused two major problems:

First, the long writing time also meant long breaks. The interval broke my chain of thought anyway. I had to start again from the beginning. That resulted in confusing views and non-related sentences.

Secondly, it also made the posts hugely long.

Now I dedicate limited time, a few hours, for each post, depending on the idea.
And I strictly adhere to those hours. I use Tomato Timer to keep a check on the time. During that time I do not go to my web browser, check my phone, do any household activity and of course, ignore my husband completely.
But one thing I have not been able to avoid is the “Mamma no working. Let's play” situation. :)
I focus on the sentence I am writing. I do not jump between sentences. I’ve made peace with the fear of losing one thought and I do not disturb the one I am working on. This keeps my ideas clear.

To finish my work within the stipulated time:

  • I write during quieter hours, especially in the morning,
  • I plan what to write the day before,
  • I am caffeinated while writing.

Sometimes I can not finish it in one go. Then before starting the next day I read what I wrote previously, aloud.

Revision:

Previously, after I finished writing, I used to correct only the red underlines. Now I take time and follow four steps before publishing a post:

  • correct the underlined places,
  • check the grammar,
  • read the post aloud at least twice; this helps me hear my own words and correct my own mistakes,
  • have some friends check the post before publishing: an extra pair of human eyes to catch errors.

Respect the readers

This single piece of advice has changed my posts for the better.
Respect the reader.
Don’t give them any false hopes or expectations.

With that in mind, I have altered the following two things in my blog:

Vague titles

I always thought out of the box, and figured that sarcastic titles would showcase my intelligence: an offhand, humorous title seemed good. How utterly wrong I was.

People search by asking relevant questions on the topic.
For example, for a hardware project with esp8266 using MicroPython, people may search for:

  • “esp8266 projects”
  • “projects with micropython”
  • “fun hardware projects”, etc.

But no one will search for “mybunny uncle” (it might remind you of your kindly uncle, but it is definitely not a hardware project in any sense of the term).

People find your blog through its RSS feed or by searching in a search engine.
So be as direct as possible. Give a title that describes the core of the content. In the words of Cory Doctorow, write your headlines as if you were a Wired service writer.

Vague Lead paragraph

The lead paragraph, the opening paragraph of your post, must explain what follows. Many times, the lead paragraph is the part shown in the search result.

Avoid conjunctions and past participles

I try not to use conjunctions, connecting clauses or the past participle tense. These make a sentence complicated to read.

Use simple words

I use simple, easy words instead of hard, heavy and huge words. It was so difficult to make the lawyer (inside me) understand that “simple is better than complicated”.

The one thing which is still difficult for me is to let go: to accept the fact that not all of my posts will be great, or even good.
There will be faults in them, and that is fine.
Instead of putting all my effort into making a single piece better, I'd rather move on and work on other topics.

by Anwesha Das at August 22, 2017 03:18 AM

August 17, 2017

Anwesha Das

DreamHost fighting to protect the fundamental rights of its users

Habeas data, my data my right, is the ethos of the right to be a free and fulfilled individual. It allows the individual to be him/herself without being monitored.

In the United States, there are several safeguards to protect and further the concept.

The First Amendment

The First Amendment (Amendment I) to the United States Constitution establishes the

  • Freedom of speech,
  • Freedom of the press,
  • Free exercise of religion,
  • Freedom to assemble peaceably
The Fourth Amendment

The Fourth Amendment (Amendment IV) to the United States Constitution

  • prohibits unreasonable searches and seizures;
  • establishes that the right to privacy is protected by the US Constitution, even though the exact word ‘privacy’ is not used. If we take a close look at this Amendment in the Bill of Rights, it bars the Government from issuing a general warrant.
The Privacy Protection Act, 1980

The Act protects press, journalists, media house, newsroom from the search conducted by the government office bearers. It mandates that it shall be unlawful for a government employee to search for or seize “work product” or “documentary materials” that are possessed by a person “in connection with a purpose to disseminate to the public a newspaper, book, broadcast, or other similar form of public communication”, in connection with the investigation or prosecution of a criminal offense, [42 U.S.C. §§ 2000aa (a), (b) (1996)]. An order, a subpoena is necessary for accessing the information, documents.

But the Government has, time and again, violated and disregarded these mandates and stepped outside its boundaries in the name of the security of the state.

The present situation with DreamHost

DreamHost is a private, Los Angeles based company. It provides web hosting, cloud computing, cloud storage and domain name registration services. For the past few months, the company has been fighting a legal battle to protect its own fundamental rights and those of one of its customers, disruptj20.org.

What is disruptj20.org?

The company hosts disruptj20.org on the web. It is a website which organized and encouraged willing individuals to protest against the present US Government. Wikipedia says: “DisruptJ20 (also Disrupt J20), a Washington, D.C.-based political organization founded in July 2016 and publicly launched on November 11 of the same year, stated its initial aim as protesting and disrupting events of the presidential inauguration of the 45th U.S. President.”

The Search Warrant

A Search Warrant was issued against DreamHost. It requires them to disclose and give away “the information associated with www.disruptj20.org that is stored at the premises owned, maintained, controlled, or operated by DreamHost” [ATTACHMENT A].

The particular list of information to be disclosed and information to be seized by the government can be seen at ATTACHMENT B.

How does it affect third parties (other than www.disruptj20.org)?

It demands that “all files” related to the website be revealed to the government, which includes the HTTP logs for visitors. This means:

  • the time and date of the visit,
  • the IP address for the visitor,
  • the website pages viewed by the visitor (through their IP address),
  • a detailed description of the software running on the visitor’s computer,
  • details of emails sent by third parties to www.disruptj20.org.

In response, the company challenged the Department of Justice on the warrant. They attempted to quash the demand for seizure and disclosure of the information through due legal process and reason.

Motion to show cause

In the usual course of action, the DOJ would respond to DreamHost’s inquiries. But here, instead of answering their inquiries, the DOJ chose to file a motion to show cause in the Washington, D.C. Superior Court, asking for an order to compel DreamHost to produce the records.

The Opposition

DreamHost filed an Opposition for the denial of the above-mentioned motion. The “Argument” part demonstrates:

  • How the Search Warrant violates the Fourth Amendment and the Privacy Protection Act.
  • That the Search Warrant requires the details of the visitors to the website, which severely endangers their freedom of speech (the First Amendment), as described by DreamHost on their blog:

“This motion is our latest salvo in what has become a months-long battle to protect the identities of thousands of unwitting internet users.”

The Electronic Frontier Foundation has lent their support and help to DreamHost, though they are not representing them in court. The matter will be heard on August 18 in Washington, D.C.

There are different kinds of security. Security for state power is a kind that is constantly protected, in contrast to security for the population, which is constantly denied, negated, curbed and restrained. Looking at this series of events, the documentary record of this particular incident raises a doubt:

  • Is the Government’s action motivated by security or not?

The only security probably being considered in this case is the security to stay in power, given the nature and subject of the website. Now it is high time that people stand up to save the individual’s, the commoner’s, right to private space, opinion and protest. Kudos to DreamHost for protecting the primary fundamental right of an individual: privacy.

by Anwesha Das at August 17, 2017 07:13 PM

August 15, 2017

Mario Jason Braganza

Book Review – i want 2 do project. tell me wat 2 do

Click me to buy!


TL;DR? It’s awesome. Buy it right now.

I was looking to dip my toes into some sort of structured help with the summer training and open source in general, because while I knew what I wanted, I just didn’t know how to go about it.

And then I realised that one of our mentors had actually gone and written a whole book on the how to.
So, I bought the paperback.
The binding is really good, the paper really nice (unlike other tech books I’ve read) and the words large enough to read.
I expect to get a lot of use, out of the book.

And a lot of use is right.
While it’s a slim volume and a pretty quick read, the book is pretty dense when it comes to the wisdom it imparts.

The book has a simple (yet substantial to execute) premise.
You’ve just tipped your toe into programming, or you’ve learnt a new language, or you’ve probably written a few programs or maybe you’re just brand new.
You want to explore the vast thrilling world that is Open Source Software.
What now?

“i want 2 do project. tell me wat 2 do.” answers the “what now” in painstaking detail.

From communication (Mailing List Guidelines) to the importance of focus (Attention to Detail) to working with mentors (the Project chapters) to the tools (Methodology & tools) to the importance of sharpening the saw (Reading …) and finally the importance of your environment (Sustenance), the book covers the entire gamut that a student or a novice programmer with open source would go through.

Shakthi writes like he speaks; pithily, concisely with the weight of his experience behind his words.

The book is chock-full of quotes (from Lady Lovelace to Menaechmus to Taleb) that lend heft to the chapters.
The references at the end of each chapter will probably keep me busy for the next few months.

The book’ll save you enormous amounts of time and heartache, in your journey, were you to heed its advice.
It’s that good.

by Mario Jason Braganza at August 15, 2017 01:08 PM