Planet dgplug

March 24, 2015

Kushal Das

Tunir, a simple CI with less pain

One of my job requirements is to keep testing the latest Fedora cloud images. We have a list of tests from the Fedora QA team, but the biggest problem is that I don’t like doing these manually. I was looking for a way to run them automatically. We could do this with a normal CI system, but there are two problems with that.

  • Most CI systems cannot handle cloud images, unless there is a real cloud running somewhere.
  • Maintaining the CI system & the cloud is a pain by my standards.

Tunir came out as a solution to these problems. It is a simple system which can run a predefined set of commands in a fresh cloud instance, or on a remote system. Btw, did I mention that you don’t need a cloud to run these cloud instances on your local system? This is possible thanks to code from Mike Ruckman.

Each job in Tunir requires two files, jobname.json and jobname.txt. The json file contains the details of the cloud image (if any) or the remote system, the RAM required for the VM, etc. The .txt file contains the shell commands to run in the system. For now there are two commands unique to Tunir. You can write @@ in front of any command to mark that it will return a non-zero exit code. There is also a SLEEP NUMBER_OF_SECONDS option, which we use when we reboot the system and want Tunir to wait before executing the next command.
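As an illustrative sketch, the two files might look like the following. The key names and values here are hypothetical, not the authoritative schema; check the Tunir documentation for the real one:

```
fedora.json (hypothetical keys and values):
{
    "name": "fedora",
    "type": "vm",
    "image": "/home/user/Fedora-Cloud-Base-20141203-21.x86_64.qcow2",
    "ram": 2048,
    "user": "fedora",
    "password": "passw0rd"
}

fedora.txt:
free -m
@@ sudo reboot
SLEEP 30
sudo journalctl --no-pager
```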

Tunir has a stateless mode, which I use all the time :) In stateless mode it will not save the results in any database; it will directly print the result in the terminal.

$ tunir --job fedora --stateless

Tunir uses redis to store some configuration information, like available ports. Remember to run the setup script to fill the configuration with available ports before starting a job.

You can install Tunir using pip; a review request is also up for Fedora. If you are on Fedora 21, you can just test with my package.

I am currently using unittest for the cloud testcases; they are available on my github. You can use fedora.json and fedora.txt from the same repo to execute the tests. An example of the tests running inside Tunir is below (I am using this in the Fedora Cloud tests).

curl -O
tar -xzvf tunirtests.tar.gz
python -m unittest tunirtests.cloudtests
sudo systemctl stop crond.service
@@ sudo systemctl disable crond.service
@@ sudo reboot
sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
@@ sudo reboot
sudo python -m unittest tunirtests.cloudservice.TestServiceAfter

UPDATE: Adding the output from Tunir for the test mentioned above.

sudo ./tunir --job fedora --stateless
[sudo] password for kdas: 
Got port: 2229
cleaning and creating dirs...
Creating meta-data...
downloading new image...
Local downloads will be stored in /tmp/tmpZrnJsA.
Downloading file:///home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 (158443520 bytes)
Succeeded at downloading Fedora-Cloud-Base-20141203-21.x86_64.qcow2
download: /boot/vmlinuz-3.17.4-301.fc21.x86_64 -> ./vmlinuz-3.17.4-301.fc21.x86_64
download: /boot/initramfs-3.17.4-301.fc21.x86_64.img -> ./initramfs-3.17.4-301.fc21.x86_64.img
/usr/bin/qemu-kvm -m 2048 -drive file=/tmp/tmpZrnJsA/Fedora-Cloud-Base-20141203-21.x86_64.qcow2,if=virtio -drive file=/tmp/tmpZrnJsA/seed.img,if=virtio -redir tcp:2229::22 -kernel /tmp/tmpZrnJsA/vmlinuz-3.17.4-301.fc21.x86_64 -initrd /tmp/tmpZrnJsA/initramfs-3.17.4-301.fc21.x86_64.img -append root=/dev/vda1 ro ds=nocloud-net -nographic
Successfully booted your local cloud image!
PID: 11880
Starting a stateless job.
Executing command: curl -O
Executing command: tar -xzvf tunirtests.tar.gz
Executing command: python -m unittest tunirtests.cloudtests
Executing command: sudo systemctl stop crond.service
Executing command: @@ sudo systemctl disable crond.service
Executing command: @@ sudo reboot
Sleeping for 30.
Executing command: sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
Executing command: @@ sudo reboot
Sleeping for 30.
Executing command: sudo python -m unittest tunirtests.cloudservice.TestServiceAfter

Job status: True

command: curl -O
status: True

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8019  100  8019    0     0   4222      0  0:00:01  0:00:01 --:--:--  4224

command: tar -xzvf tunirtests.tar.gz
status: True


command: python -m unittest tunirtests.cloudtests
status: True

Ran 4 tests in 0.036s

OK (skipped=1, unexpected successes=2)

command: sudo systemctl stop crond.service
status: True

command: @@ sudo systemctl disable crond.service
status: True

Removed symlink /etc/systemd/system/

command: @@ sudo reboot
status: True

command: sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
status: True

Ran 1 test in 0.282s


command: sudo python -m unittest tunirtests.cloudservice.TestServiceAfter
status: True

Ran 1 test in 0.070s


by Kushal Das at March 24, 2015 06:46 AM

March 18, 2015

Arpita Roy

Yet, a lot to learn.

Going on a break is becoming a regular bad habit of mine :P The culprit is Time.
After coming back to my as-usual boring life of college, I was busy with my internals. They went fine.
Coming back to my ” learning ” part.. I have so, so much to study yet.
Speaking about the little things I came across in C: Arrays, Number Systems, a brief description of Pointers and yes, FUNCTIONS – I felt like a master as I knew their details ( thanks to python :) ). There is a lot to learn about functions too.
Trying to work on a few codes that are not getting into my head.
A day passes, and at the end of every day I suppose myself to have grasped a few essential things.

P.S – That is all I could discuss for today. Need to pick up my pace in the world of learning :)

by Arpita Roy at March 18, 2015 02:58 PM

March 09, 2015

Samikshan Bairagya

spartakus: Using sparse to have semantic checks for kernel ABI breakages

I have been working on a project I have named spartakus, which deals with kernel ABI checks through semantic processing of the kernel source code. I made the source code available on Github some time back, and it does deserve a blog post.

spartakus is a tool that can be used to generate checksums for exported kernel symbols through semantic processing of the source code using sparse. These checksums constitute the basis for kernel ABI checks, with a change in a checksum meaning a change in the kABI.

spartakus (currently a WIP) is forked from sparse and has been modified to fit the requirements of semantic processing of the kernel source for kernel ABI checks. Compilation adds a new binary, ‘check_kabi‘, which can be used during the linux kernel build process to generate the checksums for all the exported symbols. These checksums are stored in the Module.symvers file, which is generated during the build process if the variable CONFIG_MODVERSIONS is set in the .config file.
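As a rough sketch of how such checksums can be compared between builds (the file contents below are made up for illustration; real Module.symvers files come from actual kernel builds), a change in a symbol's checksum can be spotted like this:

```shell
# Each line of Module.symvers looks like: <CRC> <symbol> <module path> <export type>.
# These two files are made-up stand-ins for an old and a new kernel build.
printf '0x12345678\tprintk\tvmlinux\tEXPORT_SYMBOL\n' > old.symvers
printf '0xdeadbeef\tprintk\tvmlinux\tEXPORT_SYMBOL\n' > new.symvers

# Sort by symbol name (field 2) and join the two files on it; a differing
# CRC for the same symbol (fields 2 and 5 after the join) is a kABI change.
sort -k2 old.symvers > old.sorted
sort -k2 new.symvers > new.sorted
join -j 2 old.sorted new.sorted | awk '$2 != $5 { print "kABI change:", $1 }'
# prints: kABI change: printk
```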

What purpose does spartakus serve?
In an earlier post I had spoken a bit about exported symbols constituting the kernel ABI and how its stability can be tracked through CRC checksums. genksyms has so far been the tool doing this job of generating checksums for the exported symbols that constitute the kernel ABI, but it has several problems/limitations that make it difficult for developers to maintain the stability of the kernel ABI. Some of these limitations, as determined by me, are mentioned below:

1. genksyms generates different checksums for structs/unions whose declarations are semantically similar. For example, consider the following 2 declarations of the struct ‘list_head':

struct list_head {
    struct list_head *next, *prev;
};

struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

Both declarations are essentially the same and should not result in a change in the kABI wrt ‘list_head’. sparse treats these 2 declarations as semantically the same, and different checksums are not generated.

2. For variable declarations with just an unsigned/signed specification and no type specified, sparse considers the type to be int by default.

‘unsigned foo’ is converted to ‘unsigned int foo’ and then processed to get the corresponding checksum. genksyms, on the other hand, would generate different checksums for the 2 definitions even though they are semantically the same.

sparse is licensed under the MIT license, which is GPL compatible. The files I added are under the GPLv2 license, since I have used code from genksyms, which is licensed under GPLv2.

Source Code
Development on spartakus is in progress and the corresponding source code is hosted on Github.

Lastly, I know there will be at least a few who will wonder why the name ‘spartakus’. Well, I do not totally remember, but it had something to do with sparse and Spartacus. It does sound cool though, doesn’t it?

by Samikshan Bairagya at March 09, 2015 01:55 PM

March 08, 2015

Shakthi Kannan

Installation of NixOS 14.12

Boot from the LiveCD ( nixos-graphical- ) with a 40 GB virtual disk, and login as root (no password required).

nixos login: root


Start the KDE environment using the following command:

[root@nixos:~]# start display-manager

You can then add the English Dvorak layout (optional) by selecting ‘System Settings’ -> ‘Input Devices’ -> ‘Keyboard settings’ -> ‘Layouts’ -> ‘Configure layouts’ -> ‘Add’ and use the label (dvo) for the new layout. Check that networking works as shown below:

[root@nixos:~]# ifconfig

[root@nixos:~]# ping -c3

You can now partition the disk using ‘fdisk /dev/sda’ and create two partitions (39 GB /dev/sda1 and swap on /dev/sda2). Create the filesystems, and turn on swap using the following commands:

# mkfs.ext4 -L nixos /dev/sda1

# mkswap -L swap /dev/sda2
# swapon /dev/sda2

Generate a basic system configuration file with nixos-generate-config:

# mount /dev/disk/by-label/nixos /mnt

# nixos-generate-config --root /mnt

Update /mnt/etc/nixos/configuration.nix with new packages that you need as illustrated below:

# Edit this configuration file to define what should be installed on
# your system.  Help is available in the configuration.nix(5) man page
# and in the NixOS manual (accessible by running ‘nixos-help’).

{ config, pkgs, ... }:

{
  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
    ];

  # Use the GRUB 2 boot loader.
  boot.loader.grub.enable = true;
  boot.loader.grub.version = 2;
  # Define on which hard drive you want to install Grub.
  boot.loader.grub.device = "/dev/sda";

  # networking.hostName = "nixos"; # Define your hostname.
  networking.hostId = "56db3cd3";
  # networking.wireless.enable = true;  # Enables wireless.

  # Select internationalisation properties.
  # i18n = {
  #   consoleFont = "lat9w-16";
  #   consoleKeyMap = "us";
  #   defaultLocale = "en_US.UTF-8";
  # };

  # List packages installed in system profile. To search by name, run:
  # $ nix-env -qaP | grep wget
  environment.systemPackages = with pkgs; [
    wget emacs24 git python gnuplot notmuch

    # Installing texlive is slow and incomplete on NixOS
    # (pkgs.texLiveAggregationFun { paths = [ pkgs.texLive pkgs.texLiveExtra pkgs.texLiveBeamer ]; })
    # texLive tetex lmodern
  ];

  # List services that you want to enable:

  # Enable the OpenSSH daemon.
  services.openssh.enable = true;

  # Enable CUPS to print documents.
  # services.printing.enable = true;

  # Enable the X11 windowing system.
  services.xserver.enable = true;
  services.xserver.layout = "us";
  # services.xserver.xkbOptions = "eurosign:e";

  # Enable the KDE Desktop Environment.
  services.xserver.displayManager.kdm.enable = true;
  services.xserver.desktopManager.kde4.enable = true;

  # Define a user account. Don't forget to set a password with ‘passwd’.
  users.extraUsers.apollo = {
    home = "/home/apollo";
    extraGroups = [ "wheel" ];
    useDefaultShell = true;
    isNormalUser = true;
    uid = 1000;
  };
}

Install NixOS to hard disk:

# nixos-install

setting root password...
Enter new UNIX password: ***
Retype new UNIX password: ***

passwd: password updated successfully
installation finished!

You can now reboot into the system:

# reboot

After you login to the console, set a password for the ‘apollo’ user. A screenshot of the desktop is shown below:

March 08, 2015 04:00 PM

March 04, 2015

Arpita Roy

This is LIFE

A warm hello to everyone.. A tight schedule kills you, doesn’t it? Yes, it kills me, and this was the reason I had to take such a ” long ” break from WordPress :(
To be very honest, I don’t actually remember the whole of what I did in the past few days, but I would surely like to mention the important things that I came across.
As I mentioned in my previous post about the Technical Fest being held at my college.. it was the first time for me to attend a Fest, so I was a little excited. It was fun being a part of a few events.. :)
I gave my name for a workshop that was to teach us about creating a Web-Page. It always fascinated me to open a particular website and then, with the click of your mouse, start travelling within the page. I never imagined I would ” actually know ” how to do that.
Though not much, I learnt a few ( the word Few should be seriously taken care of :P ) concepts of HTML, CSS, JQUERY.
I would certainly look forward to digging more into them ( CSS and JQUERY ).
Thanks to my seniors who pulled out time for us.
It was a workshop for two days. I did my best to understand everything that was being explained.
Next, talking about my life – ahhh, it tortures me :'( Being at home for a few days CANNOT fix the pain of the total torture life you spend at college ( trust me on this ).
Even on a holiday, you are not free.. You get loads of assignments, lab projects, and then warnings – ” College reopens and class tests will be waiting for you ” ( this is again why I so so much hate college ).
I barely care :P I am busy in my world which consists of lots and lots of learning, and I don’t feel like shifting my focus.
Along with Kushal Da’s book, I got an address to a book named byte_of_python ( reading that too ).
And the journey of my C is going perfectly. I am happy with C ( not more happy than I am with python ;) ). There were a lot of new programs that I learnt writing in C and running too.. and the best part is when you write a program and it throws ” no errors ” at all.
My sparkling teeth shine when I see ” Errors : 0 ” ( again, this has yet to happen with python ).
This part of my life makes me Happy.. All you do is learn and experiment with programs.. ( I am reading more than writing programs )
All for today. I should be summing up now. I shall write again soon.

P.S – It is fun to be a part of a life which provides you different shades of the same color with each passing day. Don’t miss it. Enjoy every day to the fullest :)

by Arpita Roy at March 04, 2015 05:04 AM

February 26, 2015

Kushal Das

What is a hackathon or hackfest? Few more tips for proposals

According to Wikipedia, a hackathon (also known as a hack day, hackfest or codefest) is an event in which computer programmers and others involved in software development, including graphic designers, interface designers and project managers, collaborate intensively on software projects. Let us go through a few points from this definition.

  • it is an event about collaboration.
  • it involves not only programmers, but designers, docs and other people.
  • it is about software projects.

We can also see that people work intensively on the projects. It can be one project, or people can work as teams on different projects. In Fedora land, the most common example of a hackathon is “Fedora Activity Days” or FADs, where a group of contributors sit together in a place and work on a project intensively. The latest example is the Design FAD which we had around a month back, where the design team worked on fixing their goals, workflows and other related things.

One should keep these things in mind while submitting a proposal for FUDCon, or actually for any other conference. If you want to teach a particular technology or tool, you should submit that as a workshop proposal rather than as a hackfest or hackathon.

Then what makes a good topic for a hackfest during FUDCon? Say you want to work on speeding up the boot time of Fedora. You may want to design 5 great icons for the projects you love. If you love photography, maybe you want to build a camera using a RaspberryPi and some nice Python code. Another good option is to ask for a list of bugs from the applications under the Fedora apps/infrastructure/releng teams and then work on fixing them during the conference.

In both hackfest and workshop proposals, there are a few points which must be present. Things like:

  • Who are the target audience for the workshop?
  • What version of Fedora must they have on their laptops?
  • Which packages should they pre-install on their computers before coming to the conference?
  • Do they need to know any particular technology, programming language or tool to take part in the workshop or hackfest?
  • Make sure that you submit proposals about projects where you contribute upstream.

The CFP is open till 9th March, so go ahead and submit awesome proposals.

by Kushal Das at February 26, 2015 01:52 PM


Understanding RapidJson

Software needs to evolve and adapt with new technologies. My new task is to make cppagent generate output in Json (JavaScript Object Notation) format. Last week I spent some time trying out different libraries and finally settled on using Rapidjson. Rapidjson is a Json manipulation library for C++ which is fast, simple and compatible with different C++ compilers on different platforms. In this post we will be looking at example code to generate, parse and manipulate Json data. For people who want to use this library, I would highly recommend playing with and understanding the example code first.

First we will write a simple program to produce the sample Json below (the same simplewriter.cpp as in the examples):

{
    "hello": "world",
    "t": true,
    "f": false,
    "n": null,
    "i": 123,
    "pi": 3.1416,
    "a": [
        0,
        1,
        2,
        3
    ]
}

To generate a Json output you need:

  • a StringBuffer object, a buffer object to write the Json output.
  • a Writer object to write Json to the buffer. Here I have used a PrettyWriter object to write human-readable and properly indented Json output.
  • functions StartObject/EndObject to open and close a Json object, i.e. “{” and “}” respectively.
  • functions StartArray/EndArray to open and close a Json array, i.e. “[” and “]”.
  • functions String(), Uint(), Bool(), Null(), Double(), which are called on the writer object to write a string, unsigned integer, boolean, null and floating point number respectively.
#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer);

int main() {
 StringBuffer s;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);                 // build the Json document in the buffer
 cout << s.GetString() << endl;   // GetString() stringifies the Json
 return 0;
}

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();   // write "{"
 writer.String("hello"); // write key "hello"
 writer.String("world"); // write its string value
 writer.String("t");
 writer.Bool(true);      // write boolean value true
 writer.String("f");
 writer.Bool(false);
 writer.String("n");
 writer.Null();          // write null
 writer.String("i");
 writer.Uint(123);       // write an unsigned integer value
 writer.String("pi");
 writer.Double(3.1416);  // write a floating point number
 writer.String("a");
 writer.StartArray();    // write "["
 for (unsigned i = 0; i < 4; i++)
  writer.Uint(i);        // write the array elements 0..3
 writer.EndArray();      // end the array "]"
 writer.EndObject();     // end the object "}"
}

Next we will manipulate the Json document and change the value for the key “hello” to “c++”.

To manipulate:

  • First you need to parse your Json data into a Document object.
  • Next you may take a Value reference to the value of the desired node/key, or you can access it directly as doc_object["key"].
  • Finally you need to call the Accept method, passing the Writer object, to write the document to the StringBuffer object.

The function below changes the values for the keys “hello”, “t” and “f” to “c++”, false and true respectively.

template <typename Document>
void changeDom(Document& d){
// any of the methods shown below can be used to change the document
Value& node = d["hello"];  // using a reference
node.SetString("c++");     // call SetString() on the reference
d["f"] = true;             // access directly and assign
d["t"].SetBool(false);     // or call SetBool() directly
}

Now to put it all together:

Before Manipulation
{
     "hello": "world",
     "t": true,
     "f": false,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [
         0,
         1,
         2,
         3
     ]
}
After Manipulation
{
     "hello": "c++",
     "t": false,
     "f": true,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [
         0,
         1,
         2,
         3
     ]
}

The final code to display the above output:

#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include "rapidjson/document.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer);

template <typename Document>
void changeDom(Document& d);

int main() {
 StringBuffer s;
 Document d;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);          // build the Json in the buffer
 d.Parse(s.GetString());   // parse the generated Json into a Document
 cout << "Before Manipulation\n" << s.GetString() << endl;
 changeDom(d);             // manipulate the parsed document
 s.Clear();                // clear the buffer to prepare for a new json document
 writer.Reset(s);          // resetting writer for a fresh json doc
 d.Accept(writer);         // writing parsed document to buffer
 cout << "After Manipulation\n" << s.GetString() << endl;
 return 0;
}

template <typename Document>
void changeDom(Document& d){
Value& node = d["hello"];
node.SetString("c++");
d["f"] = true;
d["t"].SetBool(false);
}

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();
 writer.String("hello");
 writer.String("world");
 writer.String("t");
 writer.Bool(true);
 writer.String("f");
 writer.Bool(false);
 writer.String("n");
 writer.Null();
 writer.String("i");
 writer.Uint(123);
 writer.String("pi");
 writer.Double(3.1416);
 writer.String("a");
 writer.StartArray();
 for (unsigned i = 0; i < 4; i++)
  writer.Uint(i);
 writer.EndArray();
 writer.EndObject();
}

by subho at February 26, 2015 12:30 PM

February 23, 2015

Chandan Kumar

February Python Pune Meetup: 21.02.2015

On 21st Feb, 2015, we organized the February Python Pune Meetup at Webonise Lab, Bavdhan (Pune, India). Here is the event report of the February Python Pune Meetup. We had selected 2 workshops, 1 talk-cum-workshop, 1 talk and 4 lightning talks for this meetup. More than 150 people registered for the meetup but only 70 made it to the venue.

This time we started on time at 10:00 A.M. I gave a small talk on the aims and objectives of the Python Pune Meetup, where I covered PSF, PSSI, the Python Pune Meetup, how one can contribute to the Python language and Python projects, and how it adds value to your career.

At 10:15 A.M., Anurag presented a talk on writing flexible filesystems with FUSE-Python. He started with UNIX-based file systems, an introduction to fuse-python, and how to use it for directory operations and reading files. In the end he created toyfs and demoed it in the lightning talk session by reading files.

Then, after a short break, we continued with the Django workshop by Mukesh Shukla, carrying on from the previous meetup. He explained Django models, creating migrations, and applying migrations on an existing project by performing CRUD operations on Django models, using Django-south for Django<1.7.

Again we had a short break, and at 12:30 P.M. Mayuresh started a talk-cum-workshop on integrating Python with Firebase. He started with an introduction to Firebase and how it is different from a traditional database. He created a demo chat application using Firebase in Python which performs basic read and write functionality. Here is the hosted UI for the chat client.

All the workshops were quite interesting.

And at 01:20 P.M., Rishabh presented an Automation using Ansible workshop. He started with the basics of ansible, modules and variables, and how to create an ansible playbook. To illustrate these things better, he created an OpenStack instance and wrote an ansible playbook to deploy a local GitLab instance on Fedora 21.

And finally came the lightning talks. Aditya gave a nice demo of heatmaps using pandas and IPython Notebook. Harsha demoed Easyengine, a Python CLI tool to deploy WordPress sites easily. Hardik, a college student, had used the Selenium driver and written a Python script, PyAutoLogOn, to log into his college WiFi automatically every 5 minutes (as the WiFi gets disconnected each time and asks for log-in), and demoed it. That was real fun in Python. Lastly, Anurag presented TOYFS, an implementation in FUSE-python.

And so came the end of an awesome meetup, with awesome feedback and a group photo.

We are soon coming up with a developer sprint for python related projects in March.

Thanks to Mukesh, Nishant and Vijay for helping me host the meetup, and to Webonise Lab for providing the venue. Thanks to the volunteers, attendees and speakers for making the event successful.

by Chandan Kumar at February 23, 2015 01:17 PM

February 19, 2015

Chandan Kumar

January Python Pune Meetup:31.01.2015

After the successful completion of the December Python Pune meetup, I went ahead and hosted another Python meetup in January, a.k.a. the January Python Pune Meetup, in the new year 2015. It finally happened on 31st Jan, 2015 at Red Hat, Pune (India).

Here is the event report of the January Python Pune meetup. The event started a bit late, at 10:15, with the formal agenda of the meetup. 75 people attended (an increase from the last meetup). About 50% of the attendees had also turned up for the last meetup; they are basically final year college students and professionals. Then at 10:20 A.M., I started with a quick recap of the Python 101 workshop, where I spoke about the use cases of functions, modules, file handling, exceptions and classes through hands-on examples. There was a lot of discussion on why we use __init__() within a class, with respect to other languages.

After a short break, at 11:00 A.M. the Django workshop was started by Tejas Sathe. It began with an introduction to web development and how to write a web application with Django using django-admin. He created a simple web application which takes user information using Forms, stores the information in the sqlite3 database using Django Models, and shows the same stored information on another page, explaining how responses and URL mapping are done in Django.

At 01:15 P.M., after a short break, we had our two talk sessions. Jaidev spoke about Categorical Data Analysis in Python. He explained what categorical data is by taking a problem statement related to meetup data, covering its features, how to measure it, and how to analyze the results of the measurement.

Finally, at 01:35 P.M., Rohan gave a brief introduction to the oslo libraries, a set of Python libraries containing code shared by OpenStack projects. Currently there are 27 libraries. We are planning to demonstrate the use cases of each library in upcoming meetups.

The event ended on time; we opened the floor for discussion and feedback, and distributed F21 Workstation DVDs.

The meetup went well and the feedback was good. People suggested we move to a new place where more than 100 people can attend and learn new stuff. For that we are looking for sponsors who can provide a venue.

There is a plan to introduce lightning talks, where attendees may show the cool Python applications they have developed or demonstrate popular libraries' use cases.

Thanks to Red Hat, Pune (India) for the venue and arrangements, and to the volunteers, speakers and attendees for making the event successful.

Below is some happy moments from the meetup.

See you all soon at the February Python Pune Meetup at Webonise Lab, Bavdhan, Pune (India) on 21st Feb, 2015 :).

by Chandan Kumar at February 19, 2015 03:55 AM

February 07, 2015

Shakthi Kannan

HDL Complexity Tool

[Published in Electronics For You (EFY) magazine, June 2014 edition.]

HCT stands for HDL Complexity Tool, where HDL stands for Hardware Description Language. HCT provides scores that represent the complexity of modules present in integrated circuit (IC) designs. It is written in Perl and released under the GPLv3 and LGPLv3 licenses. It employs McCabe Cyclomatic Complexity, which uses the control flow graph of the program source code to determine the complexity.
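As a quick aside, the standard definition of the McCabe metric (not specific to HCT) for a control flow graph with E edges, N nodes and P connected components is:

```
M = E - N + 2P
```

So a straight-line module scores 1, and each additional decision point (an if, a case branch, a loop condition) raises the score by one.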

There are various factors for measuring the complexity of HDL models, such as size, nesting, modularity, and timing. The measured metrics can help designers refactor their code, and also help managers plan project schedules and allocate resources accordingly. You can run the tool from the GNU/Linux terminal on Verilog, VHDL, and CDL (Computer Design Language) files or directory sources. HCT can be installed on Fedora using the command:

$ sudo yum install hct

After installation, consider the example project uart2spi, written in Verilog, which is included in this month’s EFY DVD. It implements a simple core for a UART interface and an internal SPI bus. Suppose the project is extracted on your PC to /home/guest/uart2spi/trunk, which contains the rtl/spi sources. Run the HCT tool on the rtl/spi Verilog sources as follows:

$ hct rtl/spi

We get the output:

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| spi_ctl.v                           20     1       1          0.1724 |
|                      spi_ctl        20     1       1                 |
| spi_core.v                          0      0       1          0.0076 |
|                      spi_core       0      0       1                 |
| spi_cfg.v                           0      0       1          0.0076 |
|                      spi_cfg        0      0       1                 |
| spi_if.v                            15     3       1          0.0994 |
|                      spi_if         15     3       1                 |

The output includes various attributes that are described below:

  • FILENAME is the file that is being parsed. The parser uses the file name extension to recognize the programming language.

  • MODULE refers to the specific module present in the file. A file can contain many modules.

  • IO refers to the input/output registers used in the module.

  • NET includes the network entities declared in the given module. For Verilog, it can be ‘wire’, ‘tri’, ‘supply0’ etc.

  • MCCABE provides the McCabe Cyclomatic Complexity of the module or file.

  • TIME refers to the time taken to process the file.

A specific metric can be excluded from the output using the "--output-exclude=LIST" option. For example, type the following command on a GNU/Linux terminal:

$ hct --output-exclude=TIME rtl/spi 

The output will be:

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME             | MODULE         | IO     | NET     | MCCABE    |
| spi_ctl.v                               20       1         1         |
|                        spi_ctl          20       1         1         |
| spi_core.v                              0        0         1         |
|                        spi_core         0        0         1         |
| spi_cfg.v                               0        0         1         |
|                        spi_cfg          0        0         1         |
| spi_if.v                                15       3         1         |
|                        spi_if           15       3         1         |

If you want only the scores to be listed, you can remove the MODULE listing with the "--output-no-modules" option:

$ hct --output-no-modules rtl/spi

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME              | IO      | NET      | MCCABE      | TIME      |
| spi_ctl.v               20        1          1             0.16803   |
| spi_core.v              0         0          1             0.007434  |
| spi_cfg.v               0         0          1             0.00755   |
| spi_if.v                15        3          1             0.097721  |

The tool can be run on individual files, or recursively on subdirectories with the "-R" option. The output for the entire uart2spi project sources is given below:

$ hct -R rtl

Directory: /home/guest/uart2spi/trunk/rtl/uart_core

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| uart_rxfsm.v                        10     0       1          0.1379 |
|                      uart_rxfsm     10     0       1                 |
| clk_ctl.v                           0      0       1          0.0146 |
|                      clk_ctl        0      0       1                 |
| uart_core.v                         18     1       1          0.1291 |
|                      uart_core      18     1       1                 |
| uart_txfsm.v                        9      0       1          0.1129 |
|                      uart_txfsm     9      0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/top

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| top.v                               16     0       1          0.0827 |
|                      top            16     0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| spi_ctl.v                           20     1       1          0.1645 |
|                      spi_ctl        20     1       1                 |
| spi_core.v                          0      0       1          0.0074 |
|                      spi_core       0      0       1                 |
| spi_cfg.v                           0      0       1          0.0073 |
|                      spi_cfg        0      0       1                 |
| spi_if.v                            15     3       1          0.0983 |
|                      spi_if         15     3       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/lib

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| registers.v                         5      0       1          0.0382 |
|                      bit_register   5      0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/msg_hand

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| uart_msg_handler.v                  0      0       1          0.0192 |
|                      uart_m~ndler   0      0       1                 |

The default behaviour is to dump the output to the terminal. It can be redirected to a file with the “--output-file=FILE” option. You can also specify an output file format, such as “csv”, with the “--output-format=FORMAT” option:

$ hct --output-file=/home/guest/project-metrics.csv --output-format=csv rtl/spi 

$ cat /home/guest/project-metrics.csv

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)

 spi_ctl.v   ,           , 20   , 1    , 1       , 110   , 48             , 0.1644
             , spi_ctl   , 20   , 1    , 1       , 68    , 6              ,
 spi_core.v  ,           , 0    , 0    , 1       , 46    , 43             , 0.0073
             , spi_core  , 0    , 0    , 1       , 4     , 1              ,
 spi_cfg.v   ,           , 0    , 0    , 1       , 46    , 43             , 0.0075
             , spi_cfg   , 0    , 0    , 1       , 4     , 1              ,
 spi_if.v    ,           , 15   , 3    , 1       , 80    , 44             , 0.0948
             , spi_if    , 15   , 3    , 1       , 38    , 2              ,
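Since the CSV output is machine-readable, it can be post-processed with standard tools. A small sketch, assuming the column order shown above (filename, module, IO, NET, McCabe, and so on) and that file-level rows leave the module field blank:

```shell
# Print "<filename> <McCabe score>" for each file-level row of the CSV.
# Column 2 (module) is blank on file rows; column 5 is the McCabe count.
awk -F',' '$2 ~ /^ *$/ && NF > 4 {
    gsub(/ /, "", $1); gsub(/ /, "", $5)
    print $1, $5
}' /home/guest/project-metrics.csv
```

Rows carrying a module name in the second column are skipped, so each file is reported exactly once.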

There are various yyparse options that are helpful for understanding the lexical parsing of the source code. They can be invoked using the following command:

$ hct --yydebug=NN sources

The NN values and their meanings are listed below:

0x01 Lexical tokens
0x02 Information on States
0x04 Shift, reduce, accept driver actions
0x08 Dump of the parse stack
0x16 Tracing for error recovery
0x31 Complete output for debugging

HCT can also be used with VHDL and Cyclicity CDL (Cycle Description Language) programs. For VHDL, the filenames must end with a .vhdl extension. You can rename .vhd files recursively in a directory (in Bash, for example) using the following script:

for file in `find $1 -name "*.vhd"`; do
  mv $file ${file/.vhd/.vhdl}
done

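The loop above breaks on file names containing spaces, since the backtick expansion is word-split. A variant using find alone (a sketch, with the same assumption that the project directory is passed as the first argument) avoids that:

```shell
# Rename *.vhd to *.vhdl recursively, safely handling paths with spaces.
find "$1" -name '*.vhd' -exec sh -c 'mv "$1" "${1%.vhd}.vhdl"' _ {} \;
```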
The “$1” refers to the project source directory that is passed as an argument to the script. Let us take the example of the sha256 core written in VHDL, which is also included in this month’s EFY DVD. The execution of HCT on the sha256core project is as follows:

$ hct rtl

Directory: /home/guest/sha256core/trunk/rtl

vhdl, 6 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| sha_256.vhdl                        29     0       1          0.9847 |
|                      sha_256        29     0       1                 |
| sha_fun.vhdl                        1      1       1          0.3422 |
|                                     1      1       1                 |
| msg_comp.vhdl                       20     0       1          0.4169 |
|                      msg_comp       20     0       1                 |
| dual_mem.vhdl                       7      0       3          0.0832 |
|                      dual_mem       7      0       3                 |
| ff_bank.vhdl                        3      0       2          0.0260 |
|                      ff_bank        3      0       2                 |
| sh_reg.vhdl                         19     0       1          0.6189 |
|                      sh_reg         19     0       1                 |

The “-T” option enables the use of threads to speed up computation. The LZRW1 (Lempel–Ziv Ross Williams) compressor core project implements a lossless data compression algorithm. The output of HCT on this project, without threading and with threads enabled, is shown below:

$ time hct HDL

Directory: /home/guest/lzrw1-compressor-core/trunk/hw/HDL

vhdl, 8 file(s)
real	0m3.725s
user	0m3.612s
sys     0m0.013s

$ time hct HDL -T

Directory: /home/guest/lzrw1-compressor-core/trunk/hw/HDL

vhdl, 8 file(s)
real	0m2.301s
user	0m7.029s
sys     0m0.051s

The supported input options for HCT can be viewed with the “-h” option.

The invocation of HCT can be automated and re-run for each code check-in to a project repository, so that the complexity measures are recorded periodically. The project team can then monitor and analyse the complexity of each module and decide on code refactoring strategies.
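As an illustration of such automation, a git post-commit hook could record the metrics of every check-in. This is only a sketch: the metrics/ directory, the rtl/ source path and the hook location are assumptions, and hct itself must be on the PATH.

```shell
#!/bin/sh
# .git/hooks/post-commit -- record complexity metrics for every commit.
# Adjust the rtl/ source path and the metrics/ log directory to your project.
mkdir -p metrics
hct --output-file="metrics/$(git rev-parse --short HEAD).csv" \
    --output-format=csv -R rtl
```

Each commit then leaves behind a CSV file named after its short hash, which can be diffed or graphed over time.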

February 07, 2015 11:30 PM

January 16, 2015


Finally integrating Gcov and Lcov tool into Cppagent build process

This is most probably my final task on Implementing Code Coverage Analysis for Mtconnect Cppagent. In my last post I showed how the executable files are generated using Makefiles. In Cppagent, the Makefiles are autogenerated by CMake, a cross-platform Makefile generator tool. To integrate Gcov and Lcov into the build system, we need to start from the very beginning of the process, which is CMake. The CMake commands are written in CMakeLists.txt files. A minimal CMake file could look something like this. Here we have test_srcs as the source file and agent_test as the executable.

cmake_minimum_required (VERSION 2.6)

set(test_srcs menu.cpp)

add_executable(agent_test ${test_srcs})

Now let's expand and understand the CMakeLists.txt for cppagent.


This sets the path where CMake should look for files when the file or include_directories commands are used. The set command is used to assign values to variables. You can print out all the available variables using the following snippet:

get_cmake_property(_variableNames VARIABLES)
foreach (_variableName ${_variableNames})
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()


Next section of the file:

 set(LibXML2_INCLUDE_DIRS ../win32/libxml2-2.9/include )
 set(bits 64)
 set(bits 32)
 file(GLOB LibXML2_LIBRARIES "../win32/libxml2-2.9/lib/libxml2_a_v120_${bits}.lib")
 file(GLOB LibXML2_DEBUG_LIBRARIES ../win32/libxml2-2.9/lib/libxml2d_a_v120_${bits}.lib)
 set(CPPUNIT_INCLUDE_DIR ../win32/cppunit-1.12.1/include)
 file(GLOB CPPUNIT_LIBRARY ../win32/cppunit-1.12.1/lib/cppunitd_v120_a.lib)

Here, we are checking which platform we are working on, and the library variables are set accordingly to the Windows-based libraries. We will discuss the file command later.

 set(LINUX_LIBRARIES pthread)

Next, if the OS platform is Unix-based, we execute the command uname as a child process and store the output in the CMAKE_SYSTEM_NAME variable. In a Linux environment, "Linux" will be stored in CMAKE_SYSTEM_NAME; hence we set the variable LINUX_LIBRARIES to pthread (the threading library for Linux). Next we find something similar to what we did in our test CMakeLists.txt. The project command sets the project name, version, etc. The next line stores the source file paths in the variable test_srcs:

set( test_srcs file1 file2 ...)
Now we will discuss the next few lines.
file(GLOB test_headers *.hpp ../agent/*.hpp)

The file command is used to manipulate files. You can read, write and append files; GLOB also allows globbing, which generates a list of files matching the expression you give. Here, a wildcard expression is used to generate a list of all the header files in the given folders (*.hpp).

include_directories(../lib ../agent .)

This command tells CMake to add the specified directories to the list of directories it searches when looking for a file.

find_package(CppUnit REQUIRED)

This command looks for an external package and loads its settings. REQUIRED ensures that if the package cannot be loaded, CMake stops with an error.


add_definitions is where the additional compile-time flags are added.

add_executable(agent_test ${test_srcs} ${test_headers})

This line generates an executable target for the project named agent_test; test_srcs and test_headers are its source and header files respectively.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES})

This line links the executable to its libraries.

::Gcov & Lcov Integration::

Now that we know our CMake file well, lets make the necessary changes.

Step #1

Add two variables and set the appropriate compile and linking flags for gcov and lcov respectively.

set(GCOV_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
set(GCOV_LINK_FLAGS "-lgcov")

Step #2

Split the sources into two halves: one being the unit test source files and the other being the cppagent source files. We are not interested in the unit test files’ code coverage.

set(test_srcs test.cpp ...)
set(agent_srcs ../agent/adapter.cpp ...)

Step #3

As I said in Step 2, we are not interested in the unit test source files. So here we add the Gcov compile flags only to the cppagent source files, so that .gcno files are generated only for the agent source files.


Step #4

Now we also know that for coverage analysis we need to link in the gcov library. Therefore, we do this in the following way:

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES} ${GCOV_LINK_FLAGS}) 

Step #5

Since we love things to be automated, I added a target for the make command to automate the whole process: running the tests, copying the .gcno files and moving the .gcda files to a folder, then running the lcov command to read them and prepare easily readable statistics, and finally the genhtml command to generate the HTML output. add_custom_target allows you to add a custom target for make (here I added "cov" as the target name), and COMMAND allows you to specify simple shell commands.

add_custom_target( cov
COMMAND [ -d Coverage ] && rm -rf Coverage/ || echo "No folder"
COMMAND mkdir Coverage
COMMAND agent_test
COMMAND cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
COMMAND mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
COMMAND cd Coverage && lcov -t "result" -o result.info -c -d .
COMMAND cd Coverage && genhtml -o coverage result.info
COMMENT "Generated Coverage Report Successfully!"
)


Now, to build the tests and generate the report:

Step #1 cmake .    # in the project root, i.e. cppagent/
Step #2 cd test    # since we want to build only the tests
Step #3 make       # this will build the agent_test executable
Step #4 make cov   # runs the tests, copies the files to the Coverage folder, generates the report

So, we just need to open Coverage/coverage/index.html to view the analysis report.

by subho at January 16, 2015 10:38 AM

January 15, 2015

Sayan Chowdhury

Fedora 21 Release Party, Bangalore

The Fedora Project announced the release of Fedora 21 on December 09, 2014. To celebrate the release a Fedora 21 release party was organized at Red Hat, Bangalore with the help of Archit and Humble.

The event was scheduled to start at 10AM but people started coming in from 9:30AM itself. Around 40 people turned up; a good number of them were college students.

The release party finally started at 10:30AM with Archit giving an introduction to Fedora. Then rtnpro gave a talk on what's new in the Fedora 21 release and discussed the Fedora.Next project. He was followed by Neependra Khare, who spoke on Project Atomic and Docker.

Then we all gathered and celebrated the release of Fedora 21 by cutting a cake. After the celebration, I started off by explaining the various teams in Fedora and how to approach and contact them, and gave an overview/demo of the wiki pages, mailing lists and IRC. The final talk was given by Sinny on basic RPM packaging. The talk covered the basic aspects of RPM packaging and how to create an RPM package of your own.

At 1:30PM, there was an Open House session where everybody participated actively sharing their views, queries and experiences. Fedora 21 LiveCDs were distributed among the attendees.

Thanks to all the organizers for organizing an awesome Fedora 21 release party. Looking forward to being part of other Fedora events in the future.

January 15, 2015 12:00 PM

January 13, 2015

Soumya Kanti Chakraborty


FSCONS has become an integral part of Mozilla Sweden Community. The reason being that we started the revived journey of the local community last year at FSCONS. You can read my last year’s blog here. This year during planning for the event it was decided that we will increase our footprint and try to have more than just a booth. Two of our talks got accepted for FSCONS 2014.

Focus Areas/Objectives

  • Increase Mozilla presence in Nordics with a Mozilla booth.
  • Discuss current l10n activities in Mozilla Sweden and how to increase our base; try to involve more people in contributing to l10n.
  • Try to recruit new Mozillians and contributors for Mozilla Sweden Community.
  • Showcase Firefox OS and its various devices. Try and make the booth as a marketing podium for Firefox OS devices.
  • Organize a Swedish Mozilla community get-together to discuss pitfalls and the road ahead.

Event Takeaways

  • Despite some last-minute budget constraints, we were still able to manage travel to Gothenburg for Åke, Martin and myself. Thanks to all the folks who helped adjust other needs, which finally kept us within the planned budget for the event.
  • Åke and I spoke about “Webmaker, Connected Learning and Libraries” on Sunday morning. We were in the main keynote hall and had full attendance during the whole session. The talk went well, with us explaining how digital literacy and connected learning act as the epicentre of the present-day knowledge map. There were a lot of questions asked, and hopefully we lived up to expectations in answering them.
  • The next talk, the same day, was about the journey of the Mozilla community since the last FSCONS. Oliver was my co-speaker. The sole purpose of the session was to make attendees aware of the Mozilla Sweden community, attract more contributors, share the hiccups and success metrics of our journey, and all in all be more visible across communities in Sweden. This talk went full house as well :)
  • The conference mainly runs for two days, and we were fortunate to have our talks on the 2nd day. I say fortunate because on the 1st day people are packed with enthusiasm, willingness to know and learn, and are very patient and curious about everything they perceive, but the 2nd day is lousy (and a Sunday). So our booth was super busy on Saturday (the 1st day) with questions, answers and feedback. On the 2nd day we had our sessions, which went full house (lucky!), and after that our booth again got a lot of attention from post-session feedback and questions. So while other booths were doing so-so on the 2nd day, we kept the fire burning :)
  • Our community talk hit right on the spot; we got 4-5 queries about how to contribute and where to start looking for things that interest them in Mozilla. We took no time to respond and provided all the needed details. (4-5 people coming forward to contribute is a big thing for us, considering we are a small active community.)
  • On the 2nd day we also had a community meeting to discuss roles, the task list and future plans for l10n in Sweden. Åke, Martin, Oliver and I joined in, and we had a really effective l10n meetup.
  • I spoke with the Language team of the Gothenburg University (FSCONS Venue) and they promised to help us in securing more l10n contributions for Mozilla in the days ahead.
  • This time we had multiple Flame devices, all flashed to the latest Firefox OS. They were a crowd-puller, especially since the Flame is not so common here in Sweden; the few people who have Firefox OS phones got them from eBay (ZTE Open). Visitors to our booth were curious, took a lot of time playing with the devices, and asked a list of similar questions. They were super excited to see the Flame and Firefox OS advancing up the charts so fast.


FSCONS this year was seemingly much more successful than last year. We fulfilled all the goals and agenda metrics we set for the event and were very happy to complete it so satisfactorily. Thanks to Åke, Martin and Oliver for lending a big hand in the whole event, without whom it would not have been so worthwhile this year.

We will keep coming back to FSCONS to mark the community anniversary, so to speak, and to increase the community's presence in the Nordics.


by Chakraborty SoumyaKanti at January 13, 2015 12:41 AM

December 27, 2014

Sayan Chowdhury

Migrate a running process into tmux

Being a regular tmux user, I find migrating a running process into tmux using reptyr handy.

reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

The package is available in the Fedora/Ubuntu repositories.

% sudo yum install -y reptyr        # For Fedora users
% sudo apt-get install -y reptyr    # For Ubuntu users

The steps to migrate a process are:

  • Send the current foreground job to the background using CTRL-Z.
  • List all the background jobs using `jobs -l`. This will get you the PID.
% jobs -l
[1]  + 16189 suspended  vim foobar.rst

Here the PID is 16189

  • Start a new tmux or screen session. I will be using tmux.
% tmux
  • Reattach the background process using
% reptyr 16189

If this error appears

Unable to attach to pid 16189: Operation not permitted
The kernel denied permission while attaching

Then type in the following command as root.

% echo 0 > /proc/sys/kernel/yama/ptrace_scope

These commands are compatible with screen as well.

December 27, 2014 12:00 PM

December 26, 2014

Soumya Kanti Chakraborty


The last few months have been thoroughly hectic for me. In college, “Stack theory in Data Structures” was a must-learn, and it has come in as an exceptionally handy utility throughout my professional career as a developer.

Now, implementing the stack theory concept “Last In, First Out (LIFO)” in writing my blogs, I will start from the recent activities and roll back to where my last post ended.

I am still struggling with my time management skills, especially as I'm a procrastinator. Nevertheless, I will keep on writing my blog :)

by Chakraborty SoumyaKanti at December 26, 2014 03:42 AM