Planet dgplug

February 26, 2015

Kushal Das

What is a hackathon or hackfest? A few more tips for proposals

According to Wikipedia, a hackathon (also known as a hack day, hackfest or codefest) is an event in which computer programmers and others involved in software development, including graphic designers, interface designers and project managers, collaborate intensively on software projects. Let us go through a few points from this definition.

  • It is an event about collaboration.
  • It involves not only programmers, but also designers, documentation writers and other people.
  • It is about software projects.

We can also see that people work intensively on the projects. It can be one project, or people can work as teams on different projects. In the Fedora world, the most common example of a hackathon is a “Fedora Activity Day” or FAD, where a group of contributors sit together in one place and work intensively on a project. The latest example is the Design FAD held around a month back, where the design team worked on fixing their goals, workflows and other related things.

One should keep these things in mind while submitting a proposal for FUDCon, or for any other conference. If you want to teach a particular technology or tool, you should submit it as a workshop proposal rather than as a hackfest or hackathon.

So what makes a good topic for a hackfest during FUDCon? Say you want to work on speeding up the boot time of Fedora. You may want to design 5 great icons for the projects you love. If you love photography, maybe you want to build a camera using a Raspberry Pi and some nice Python code. Another good option is to ask for a list of bugs from the applications under the Fedora apps/infrastructure/releng teams, and then work on fixing them during the conference.

In both hackfest and workshop proposals, there are a few points which must be present. Things like:

  • Who is the target audience for the workshop?
  • What version of Fedora must they have on their laptops?
  • Which packages should they pre-install on their computers before coming to the conference?
  • Do they need to know any particular technology, programming language or tool to take part in the workshop or hackfest?
  • Make sure that you submit proposals about projects where you contribute upstream.

The CFP is open until 9th March, so go ahead and submit awesome proposals.

by Kushal Das at February 26, 2015 01:52 PM

February 25, 2015

Kushal Das

FUDCON Pune 2015 CFP is open

FUDCON, the Fedora Users and Developers Conference, is going to happen next in Pune, India, from 26th June to 28th June at the Maharashtra Institute of Technology College of Engineering (MIT COE). The call for proposals (CFP) is already out, and 9th March is the last date to submit a talk/workshop. If you are working on any upstream project, you may want to talk about your work to the technical crowd at the conference. If you are a student and want to talk about the latest patches you have submitted to an upstream project, this is the right place to do so. Maybe you have never spoken in front of a crowd like this before, but you can start by submitting a talk to FUDCon.

A few tips for your talk/workshop proposal

  • Write about your intended audience. Is this something useful for students? Or for system administrators? Be clear about who your target audience is.
  • Provide a talk outline with points about everything you want to cover; it is better to include a time estimate in the outline.
  • What will the attendees get out of your talk?
  • Provide links to the projects, source code, blogs, and presentations you gave before. These will add more value to the proposal.
  • Submit your proposal early, that way more people from the talk selection committee will be able to go through your talk proposal.
  • Make sure you have a recorded copy of any demo you want to do on stage, because it is generally a bad idea to do a demo during a live talk. Things can go wrong very fast.
  • Write your speaker biography properly. Do not assume that everyone knows you. Give links to all the other talks you have given before; links to recorded videos are also a very nice thing to have in the biography.
  • Make sure that you write your proposal for the attendees of your talk. They will be the measure of success for the talk/workshop. (I am not talking about numbers, but about the quality of knowledge sharing.)
  • Try to avoid giving a generic talk, like an introduction to Open Source/Linux.
  • In case you are talking about an upstream project, choose a project where you have enough contributions. That way the selection committee will know that you are a good person to give that talk. We know it is very tempting to talk about the latest fancy and shiny project, but please do so only if you are an upstream contributor.
  • Please do not submit talks on your products. This is a community event, not a company meet.
  • Write more in your talk proposal. It is never bad to explain or communicate more in a talk/workshop proposal.

In case you need help with your proposal, you can show it to other community members before submitting it. You can always find a few of us in the #fedora-india IRC channel.

So do not waste time, go ahead and submit a talk or workshop proposal.

by Kushal Das at February 25, 2015 04:45 PM

February 23, 2015

Chandan Kumar

February Python Pune Meetup: 21.02.2015

On 21st Feb 2015, we organized the February Python Pune Meetup at Webonise Lab, Bavdhan (Pune, India). Here is the event report. We had selected 2 workshops, 1 talk-cum-workshop, 1 talk and 4 lightning talks for this meetup. More than 150 people registered, but only 70 made it to the venue.

This time we started on time at 10:00 AM. I gave a small talk on the aims and objectives of the Python Pune Meetup, where I covered PSF, PSSI, the Python Pune Meetup, how one can contribute to the Python language and Python projects, and how it adds value to your career.

By 10:15 AM, Anurag presented a talk on writing flexible filesystems with fuse-python. He started with UNIX-based file systems, an introduction to fuse-python, and how to use it for directory operations and reading files. In the end he created toyfs and demoed it in the lightning talks by reading files.

Then, after a short break, we continued with the Django workshop by Mukesh Shukla, picking up from the previous meetup. He explained Django models, creating migrations, and applying migrations to an existing project by performing CRUD operations on Django models, using django-south for Django < 1.7.

After another short break, by 12:30 PM, Mayuresh started a talk-cum-workshop on integrating Python with Firebase. He began with an introduction to Firebase and how it differs from a traditional database, and created a demo chat application using Firebase in Python which performs basic read and write operations. Here is the hosted UI for the chat client.

All the workshops were quite interesting.

And by 1:20 PM, Rishabh presented a workshop on automation using Ansible. He started with the basics of Ansible, modules and variables, and how to create an Ansible playbook. To demonstrate these in a better way, he created an OpenStack instance and wrote an Ansible playbook to deploy a local GitLab instance on Fedora 21.
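A minimal playbook in that spirit (the package and service names below are hypothetical, purely to illustrate the playbook structure, not the one shown at the meetup) might look like:

```yaml
# Hypothetical sketch: install and start a GitLab package on Fedora hosts.
- hosts: gitlab_servers
  become: yes
  tasks:
    - name: Install the GitLab package (assumed package name)
      yum:
        name: gitlab
        state: present

    - name: Start and enable the GitLab service (assumed service name)
      service:
        name: gitlab
        state: started
        enabled: yes
```

Each task uses a standard Ansible module (yum, service), and the playbook is run against a host group with ansible-playbook.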

And finally came the lightning talks. Aditya gave a nice demo of heat maps using pandas and IPython Notebook. Harsha demoed EasyEngine, a Python CLI tool to deploy WordPress sites easily. Hardik, a college student, had used the Selenium driver to write a Python script, PyAutoLogOn, that logs into his college WiFi automatically every 5 minutes (as the WiFi gets disconnected each time and asks for a login), and he demoed it. That was real fun in Python. Lastly, Anurag presented toyfs, an implementation using fuse-python.

And so an awesome meetup, with awesome feedback, came to an end with a group photo.

We are soon coming up with a developer sprint for Python-related projects in March.

Thanks to Mukesh, Nishant and Vijay for helping me host the meetup, and to Webonise Lab for providing the venue. Thanks to the volunteers, attendees and speakers for making the event successful.

by Chandan Kumar at February 23, 2015 01:17 PM

February 19, 2015

Chandan Kumar

January Python Pune Meetup: 31.01.2015

After the successful completion of the December Python Pune Meetup, I went ahead and hosted another Python meetup in January, a.k.a. the January Python Pune Meetup, in the new year 2015. It finally happened on 31st Jan 2015 at Red Hat, Pune (India).

Here is the event report of the January Python Pune Meetup. The event started a bit late, at 10:15, with the formal agenda of the meetup. 75 people attended (an increase from the last meetup). About 50% of the attendees had also come to the last meetup; they were mostly final-year college students and professionals. Then, by 10:20 AM, I started with a quick recap of the Python 101 workshop, where I spoke about the use of functions, modules, file handling, exceptions and classes through hands-on examples. There was a lot of discussion on why we use __init__() within a class, compared with other languages.

After a short break, by 11:00 AM, the Django workshop was started by Tejas Sathe. It began with an introduction to web development and how to write a web application in Django using django-admin. He created a simple web application which takes user information using forms, stores the information in an sqlite3 database using Django models, and displays the stored information on another page, explaining how responses and URL mapping are done in Django.

By 1:15 PM, after a short break, we had our two talk sessions. Jaidev spoke about categorical data analysis in Python. He explained what categorical data is by taking a problem statement related to meetup data: its features, how to measure it, and how to analyse the results of the measurement.

Finally, by 1:35 PM, Rohan gave a brief introduction to the oslo libraries, a set of Python libraries containing code shared by OpenStack projects. Currently there are 27 libraries. We are planning to demonstrate the use cases of each library in upcoming meetups.

The event ended on time; we opened the floor for discussion and feedback, and distributed F21 Workstation DVDs.

The meetup went well and the feedback was good. People asked us to move to a new place where more than 100 people can attend and learn new stuff. For that, we are looking for sponsors who can provide a venue.

There is a plan to introduce lightning talks, where attendees can show a cool Python application they have developed or demonstrate the use cases of popular libraries.

Thanks to Red Hat, Pune (India) for the venue and arrangements, volunteers, speakers and attendees for making the event successful.

Below are some happy moments from the meetup.

See you soon all in February Pune Python Meetup at Webonise Lab, Bavdhan, Pune (India) on 21st Feb, 2015 :).

by Chandan Kumar at February 19, 2015 03:55 AM

February 16, 2015

Arpita Roy

Growing up :)

Date – 14th Feb 2015 ( had been writing in parts again, finished today)
To be honest, i am myself disappointed that i am not able to write daily :P though i want to be ” consistent” ( well, i try ) but my life’s biggest barrier pops up ( College :/ )

Coming to today, finally i am happy that the calendar has got one day which is ” not for me ” :D ( One is smart enough to get my point :P )
Okay, shifting to what i wanted to mention…
I am on a ride, which is taking me really really up.. It’s fun.. but ” to balance ” is what keeps importance !!
Let me enlighten you with my C experience first.. would use just a word ” exciting ” :) there is yet a lot to discover..
Talking about what i came across in the past few days.. Few Programs, let me mention, few simple programs ( which seemed simple after i got the solutions :P )
You at least need to know the basics of programming. Well, C is serving a perfect meal.
A beginner cannot prosper without a  few things ( this is MY current situation) – lots and lots of questions, why this, why not this, confusions, and plenty of thinking.. The good and the best part is, for every problem there are solutions. You need to find them.
Yes, i am finding my part of solutions and they are a great success always (:
Few days back, i had my C class, it was 17:30 when i entered my dorm.. ( with infinite questions in my mind )
We were taught a few programs that day, about switch-case, break, and one about loops.
This was the first time when i understood, a simple code can be written in multiple ways.
i will share mine.

#include <stdio.h>
int main()
{
    int NUM, REM, SUM = 0;
    printf("ENTER A NUMBER: ");
    scanf("%d", &NUM);
    do {
        REM = NUM % 10;
        SUM = SUM + REM;
        NUM = NUM / 10;
    } while (NUM > 0);
    printf("%d", SUM);

    return 0;
}
The same code can be written using while as well, which was a much better choice than do-while.. and the most important thing that i learnt was the difference between ” while and do-while ” and also when to use the former and when to strike on the latter..
Thanks to kushal da, bnprk and darkowlzz.
I was given a lot of code to write – one was to print the Fibonacci series. The funny part was, Python made me do that in three to four lines while it took a few more lines in C to do the same.
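For example, a short Python version of the Fibonacci exercise (a sketch of the kind of program mentioned, not the exact classroom code) fits in a few lines:

```python
def fibonacci(n):
    """Return a list of the first n Fibonacci numbers."""
    series = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b  # advance the pair: next term is the sum of the last two
    return series

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```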
At the end of every day it’s good to see that you have learned a bunch of things. You feel good knowing you solved things that may be tough for the person sitting next to you ( Please don’t hesitate to help )
I shall share my programs ( very soon, hopefully tomorrow)
P.S – This one is getting long :P I shall switch in tomorrow and also thank you readers :)

by Arpita Roy at February 16, 2015 07:29 PM

February 08, 2015

Arpita Roy


Hey :)
I went on a break :P Well, not for a very long time i suppose.. i am here again :)
Talking about the last few days i spent….. they were total torture.. Reason ??  Yes… ” college ” :(
Days are passing like seconds and the watch seems to have lost its minutes and seconds hands  >_<
At the end of the day, you keep wondering ” what on you spent your entire day ” and as an answer you get ” oh, i wasted my today’s day, that certainly won’t happen from tomorrow “.. and trust me, you think the same again the very next day :P
Speaking a little about college, ( yes, i still hate college )
Like i said, i was introduced to a very new language, C.. :) So far, the language has kept my enthusiasm at its peak.. hope it serves me well till the end.. ( Fingers Crossed )
Next, Python.. i really find very little time to spend on it.. ( after spending my entire day on boring and silly subjects :/ ) still, trying my best to make use of the little time i get…
BCREC ( my college ) is organizing a technical fest.. A seminar was arranged for us, so that we are aware of the events being organized in the fest.. though a few of the events are worth participating in.. looking forward :)
A day begins and the day ends.. you have to play your role within the boundaries..
P.S – I am starving for holidays…

by Arpita Roy at February 08, 2015 07:33 PM

February 07, 2015

Shakthi Kannan

HDL Complexity Tool

[Published in Electronics For You (EFY) magazine, June 2014 edition.]

HCT stands for HDL Complexity Tool, where HDL stands for Hardware Description Language. HCT provides scores that represent the complexity of modules present in integrated circuit (IC) designs. It is written in Perl and released under the GPLv3 and LGPLv3 licenses. It employs the McCabe cyclomatic complexity measure, which uses the control flow graph of the program source code to determine the complexity.
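As a rough illustration of the McCabe idea (this is not HCT's Perl implementation; a simple decision-point count stands in for the full control flow graph), the complexity of a straight-line module is 1, and each branching construct adds one:

```python
import re

def approx_mccabe(source):
    """Approximate McCabe cyclomatic complexity as decision points + 1.

    A simplistic keyword count for illustration only; real tools such as
    HCT parse the source and build its control flow graph.
    """
    decisions = len(re.findall(r"\b(if|elsif|else if|case|for|while)\b", source))
    return decisions + 1

verilog = """
always @(posedge clk) begin
  if (reset)
    count <= 0;
  else if (enable)
    count <= count + 1;
end
"""
print(approx_mccabe(verilog))  # 3: two decision points, plus one
```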

There are various factors for measuring the complexity of HDL models, such as size, nesting, modularity and timing. The measured metrics can help designers refactor their code, and also help managers plan project schedules and allocate resources accordingly. You can run the tool from a GNU/Linux terminal on Verilog, VHDL, and CDL (Computer Design Language) files or directory sources. HCT can be installed on Fedora using the command:

$ sudo yum install hct

After installation, consider the example project uart2spi, written in Verilog, which is included in this month’s EFY DVD. It implements a simple core for a UART interface and an internal SPI bus. Suppose the project sources are located at /home/guest/uart2spi/trunk, with the SPI sources under rtl/spi. Run the HCT tool on the rtl/spi Verilog sources as follows:

$ hct rtl/spi

We get the output:

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| spi_ctl.v                           20     1       1          0.1724 |
|                      spi_ctl        20     1       1                 |
| spi_core.v                          0      0       1          0.0076 |
|                      spi_core       0      0       1                 |
| spi_cfg.v                           0      0       1          0.0076 |
|                      spi_cfg        0      0       1                 |
| spi_if.v                            15     3       1          0.0994 |
|                      spi_if         15     3       1                 |

The output includes various attributes that are described below:

  • FILENAME is the file that is being parsed. The parser uses the file name extension to recognize the programming language.

  • MODULE refers to the specific module present in the file. A file can contain many modules.

  • IO refers to the input/output registers used in the module.

  • NET includes the network entities declared in the given module. For Verilog, it can be ‘wire’, ‘tri’, ‘supply0’ etc.

  • MCCABE provides the McCabe Cyclomatic Complexity of the module or file.

  • TIME refers to the time taken to process the file.

A specific metric can be excluded from the output using the “--output-exclude=LIST” option. For example, type the following command in a GNU/Linux terminal:

$ hct --output-exclude=TIME rtl/spi 

The output will be:

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME             | MODULE         | IO     | NET     | MCCABE    |
| spi_ctl.v                               20       1         1         |
|                        spi_ctl          20       1         1         |
| spi_core.v                              0        0         1         |
|                        spi_core         0        0         1         |
| spi_cfg.v                               0        0         1         |
|                        spi_cfg          0        0         1         |
| spi_if.v                                15       3         1         |
|                        spi_if           15       3         1         |

If you want only the scores to be listed, you can remove the MODULE listing with the “--output-no-modules” option:

$ hct --output-no-modules rtl/spi

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME              | IO      | NET      | MCCABE      | TIME      |
| spi_ctl.v               20        1          1             0.16803   |
| spi_core.v              0         0          1             0.007434  |
| spi_cfg.v               0         0          1             0.00755   |
| spi_if.v                15        3          1             0.097721  |

The tool can be run on individual files, or recursively on subdirectories with the “-R” option. The output for the entire uart2spi project sources is given below:

$ hct -R rtl

Directory: /home/guest/uart2spi/trunk/rtl/uart_core

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| uart_rxfsm.v                        10     0       1          0.1379 |
|                      uart_rxfsm     10     0       1                 |
| clk_ctl.v                           0      0       1          0.0146 |
|                      clk_ctl        0      0       1                 |
| uart_core.v                         18     1       1          0.1291 |
|                      uart_core      18     1       1                 |
| uart_txfsm.v                        9      0       1          0.1129 |
|                      uart_txfsm     9      0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/top

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| top.v                               16     0       1          0.0827 |
|                      top            16     0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| spi_ctl.v                           20     1       1          0.1645 |
|                      spi_ctl        20     1       1                 |
| spi_core.v                          0      0       1          0.0074 |
|                      spi_core       0      0       1                 |
| spi_cfg.v                           0      0       1          0.0073 |
|                      spi_cfg        0      0       1                 |
| spi_if.v                            15     3       1          0.0983 |
|                      spi_if         15     3       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/lib

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| registers.v                         5      0       1          0.0382 |
|                      bit_register   5      0       1                 |

Directory: /home/guest/uart2spi/trunk/rtl/msg_hand

verilog, 1 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| uart_msg_handler.v                  0      0       1          0.0192 |
|                      uart_m~ndler   0      0       1                 |

The default behaviour is to dump the output to the terminal. It can be redirected to a file with the “--output-file=FILE” option. You can also specify an output file format, such as “csv”, with the “--output-format=FORMAT” option:

$ hct --output-file=/home/guest/project-metrics.csv --output-format=csv rtl/spi 

$ cat /home/guest/project-metrics.csv

Directory: /home/guest/uart2spi/trunk/rtl/spi

verilog, 4 file(s)

 spi_ctl.v   ,           , 20   , 1    , 1       , 110   , 48             , 0.1644
             , spi_ctl   , 20   , 1    , 1       , 68    , 6              ,
 spi_core.v  ,           , 0    , 0    , 1       , 46    , 43             , 0.0073
             , spi_core  , 0    , 0    , 1       , 4     , 1              ,
 spi_cfg.v   ,           , 0    , 0    , 1       , 46    , 43             , 0.0075
             , spi_cfg   , 0    , 0    , 1       , 4     , 1              ,
 spi_if.v    ,           , 15   , 3    , 1       , 80    , 44             , 0.0948
             , spi_if    , 15   , 3    , 1       , 38    , 2              ,

There are various yyparse options that are helpful to understand the lexical parsing of the source code. They can be invoked using the following command:

$ hct --yydebug=NN sources

The NN options and their meanings are listed below:

0x01 Lexical tokens
0x02 Information on States
0x04 Shift, reduce, accept driver actions
0x08 Dump of the parse stack
0x16 Tracing for error recovery
0x31 Complete output for debugging

HCT can also be used with VHDL and Cyclicity CDL (Cycle Description Language) programs. For VHDL, the file names must end with a .vhdl extension. You can rename .vhd files recursively in a directory (in Bash, for example) using the following script:

for file in `find $1 -name "*.vhd"`
do
  mv $file ${file/.vhd/.vhdl}
done

The “$1” refers to the project source directory that is passed as an argument to the script. Let us take the example of sha256 core written in VHDL, which is also included in this month’s EFY DVD. The execution of HCT on the sha256core project is as follows:

 $  hct rtl

Directory: /home/guest/sha256core/trunk/rtl

vhdl, 6 file(s)
| FILENAME           | MODULE       | IO   | NET   | MCCABE   | TIME   |
| sha_256.vhdl                        29     0       1          0.9847 |
|                      sha_256        29     0       1                 |
| sha_fun.vhdl                        1      1       1          0.3422 |
|                                     1      1       1                 |
| msg_comp.vhdl                       20     0       1          0.4169 |
|                      msg_comp       20     0       1                 |
| dual_mem.vhdl                       7      0       3          0.0832 |
|                      dual_mem       7      0       3                 |
| ff_bank.vhdl                        3      0       2          0.0260 |
|                      ff_bank        3      0       2                 |
| sh_reg.vhdl                         19     0       1          0.6189 |
|                      sh_reg         19     0       1                 |

The “-T” option enables the use of threads to speed up computation. The LZRW1 (Lempel–Ziv Ross Williams) compressor core project implements a lossless data compression algorithm. The output of HCT on this project, without threading and with threads enabled, is shown below:

$ time hct HDL

Directory: /home/guest/lzrw1-compressor-core/trunk/hw/HDL

vhdl, 8 file(s)
real	0m3.725s
user	0m3.612s
sys     0m0.013s

$ time hct HDL -T

Directory: /home/guest/lzrw1-compressor-core/trunk/hw/HDL

vhdl, 8 file(s)
real	0m2.301s
user	0m7.029s
sys     0m0.051s

The supported input options for HCT can be viewed with the “-h” option.

The invocation of HCT can be automated, and re-run for each code check-in that happens to a project repository. The complexity measures are thus recorded periodically. The project team will then be able to monitor and analyse the complexity of each module, and decide on code refactoring strategies.

February 07, 2015 11:30 PM

January 20, 2015

Shakthi Kannan

GNU Unified Parallel C

[Published in Open Source For You (OSFY) magazine, May 2014 edition.]

This article guides readers through the installation of GNU Unified Parallel C, which is designed for high performance computing on large scale parallel machines.

GNU Unified Parallel C is an extension to the GNU C compiler (GCC) which supports execution of Unified Parallel C (UPC) programs. UPC uses the Partitioned Global Address Space (PGAS) model for its implementation. The current version of UPC is 1.2, and a 1.3 draft specification is available. GNU UPC is released under the GPL license, while the UPC specification is released under the new BSD license. To install it on Fedora, you first need to install the gupc repository:

$ sudo yum install

You can then install the gupc RPM using the following command:

$ sudo yum install gupc-gcc-upc

The installation directory is /usr/local/gupc. You will also require the numactl (library for tuning Non-Uniform Memory Access machines) development packages:

$ sudo yum install numactl-devel numactl-libs

To add the installation directory to your environment, install the environment-modules package:

$ sudo yum install environment-modules

You can then load the gupc module with:

# module load gupc-x86_64

Consider the following simple ‘hello world’ example:

#include <stdio.h>

int main()
{
   printf("Hello World\n");
   return 0;
}
You can compile it using:

# gupc hello.c -o hello

Then run it with:

# ./hello -fupc-threads-5

Hello World
Hello World
Hello World
Hello World
Hello World

The argument -fupc-threads-N specifies the number of threads to be run. The program can also be executed using:

# ./hello -n 5

The gupc compiler provides a number of compile and run-time options. The ‘-v’ option produces a verbose output of the compilation steps. It also gives information on GNU UPC. An example of such output is shown below:

# gupc hello.c -o hello -v

Driving: gupc -x upc hello.c -o hello -v -fupc-link
Using built-in specs.
Target: x86_64-redhat-linux
Configured with: ...
Thread model: posix
gcc version 4.8.0 20130311 (GNU UPC 4.8.0-3) (GCC) 
COLLECT_GCC_OPTIONS='-o' 'hello' '-v' '-fupc-link' '-mtune=generic' '-march=x86-64'
GNU UPC (GCC) version 4.8.0 20130311 (GNU UPC 4.8.0-3) (x86_64-redhat-linux)
	compiled by GNU C version 4.8.0 20130311 (GNU UPC 4.8.0-3),
        GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
#include "..." search starts here:
#include <...> search starts here:
End of search list.
GNU UPC (GCC) version 4.8.0 20130311 (GNU UPC 4.8.0-3) (x86_64-redhat-linux)
	compiled by GNU C version 4.8.0 20130311 (GNU UPC 4.8.0-3), 
        GMP version 5.0.5, MPFR version 3.1.1, MPC version 0.9
GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072
Compiler executable checksum: 9db6d080c84dee663b5eb4965bf5012f
COLLECT_GCC_OPTIONS='-o' 'hello' '-v' '-fupc-link' '-mtune=generic' '-march=x86-64'
 as -v --64 -o /tmp/cccSYlmb.o /tmp/ccTdo4Ku.s
COLLECT_GCC_OPTIONS='-o' 'hello' '-v' '-fupc-link' '-mtune=generic' '-march=x86-64'

The -g option will generate debug information. To output debugging symbol information in DWARF-2 (Debugging With Attributed Record Formats), use the -dwarf-2-upc option. This can be used with GDB-UPC, a GNU debugger that supports UPC.

The -fupc-debug option will also generate filename and the line numbers in the output.

The optimization levels are similar to the ones supported by GCC: ‘-O0’, ‘-O1’, ‘-O2’, and ‘-O3’.

Variables that are shared among threads are declared using the ‘shared’ keyword. Examples include:

shared int i;
shared int a[THREADS];
shared char *p;

‘THREADS’ is a reserved keyword that represents the number of threads that will be executed at run time. Consider a simple vector addition example:

#include <upc_relaxed.h>
#include <stdio.h>

shared int a[THREADS];
shared int b[THREADS];
shared int vsum[THREADS];

int main()
{
  int i;

  /* Initialization */
  for (i=0; i<THREADS; i++) {
    a[i] = i + 1;               /* a[] = {1, 2, 3, 4, 5}; */
    b[i] = THREADS - i;         /* b[] = {5, 4, 3, 2, 1}; */
  }

  /* Computation */
  for (i=0; i<THREADS; i++)
    if (MYTHREAD == i % THREADS)
      vsum[i] = a[i] + b[i];

  upc_barrier;

  /* Output */
  if (MYTHREAD == 0) {
    for (i=0; i<THREADS; i++)
      printf("%d ", vsum[i]);
  }

  return 0;
}

‘MYTHREAD’ indicates the thread that is currently running. upc_barrier is a blocking synchronization primitive that ensures that all threads complete before proceeding further. Only one thread is required to print the output, and thread 0 is used for that. The program can be compiled and executed using:

# gupc vector_addition.c -o vector_addition
# ./vector_addition -n 5

6 6 6 6 6

The computation loop in the above code can be simplified with the upc_forall statement:

#include <upc_relaxed.h>
#include <stdio.h>

shared int a[THREADS];
shared int b[THREADS];
shared int vsum[THREADS];

int main()
{
  int i;

  /* Initialization */
  for (i=0; i<THREADS; i++) {
    a[i] = i + 1;               /* a[] = {1, 2, 3, 4, 5}; */
    b[i] = THREADS - i;         /* b[] = {5, 4, 3, 2, 1}; */
  }

  /* Computation */
  upc_forall(i=0; i<THREADS; i++; i)
      vsum[i] = a[i] + b[i];

  upc_barrier;

  if (MYTHREAD == 0) {
    for (i=0; i<THREADS; i++)
      printf("%d ", vsum[i]);
  }

  return 0;
}

The upc_forall construct is similar to a for loop, except that it accepts a fourth parameter, the affinity field, which indicates the thread on which the computation runs. It can be an integer (internally interpreted as integer % THREADS), or an address corresponding to a thread. The program can be compiled and tested with:

# gupc upc_vector_addition.c -o upc_vector_addition
# ./upc_vector_addition -n 5

6 6 6 6 6

The same example can also be implemented using shared pointers:

#include <upc_relaxed.h>
#include <stdio.h>

shared int a[THREADS];
shared int b[THREADS];
shared int vsum[THREADS];

int main()
{
  int i;
  shared int *p1, *p2;

  p1 = a;
  p2 = b;

  /* Initialization */
  for (i=0; i<THREADS; i++) {
    *(p1 + i) = i + 1;          /* a[] = {1, 2, 3, 4, 5}; */
    *(p2 + i) = THREADS - i;    /* b[] = {5, 4, 3, 2, 1}; */
  }

  /* Computation */
  upc_forall(i=0; i<THREADS; i++, p1++, p2++; i)
    vsum[i] = *p1 + *p2;

  upc_barrier;

  /* Output */
  if (MYTHREAD == 0) {
    for (i = 0; i < THREADS; i++)
      printf("%d ", vsum[i]);
  }

  return 0;
}

The program can be compiled and executed using:
# gupc pointer_vector_addition.c -o pointer_vector_addition
# ./pointer_vector_addition -n 5

6 6 6 6 6

Memory can also be allocated dynamically. The upc_all_alloc function allocates global memory that is shared among threads; it is a collective function, meaning it is invoked by every thread, and all threads receive the same allocation. The upc_global_alloc function is non-collective: if multiple threads call it, each thread receives a different allocation in the shared address space. The upc_alloc function allocates shared memory that is local to the calling thread. Their respective declarations are as follows:

shared void *upc_all_alloc (size_t nblocks, size_t nbytes);
shared void *upc_global_alloc (size_t nblocks, size_t nbytes);
shared void *upc_alloc (size_t nbytes);
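As a sketch of how the collective variant might be used (this example is my own illustration, not taken from the GUPC documentation, and needs a UPC compiler such as GUPC to build):

```c
#include <upc.h>
#include <stdio.h>

int main()
{
  /* One int per thread, distributed round-robin across the threads */
  shared int *v = (shared int *) upc_all_alloc(THREADS, sizeof(int));

  v[MYTHREAD] = MYTHREAD + 1;   /* each thread writes its own element */
  upc_barrier;

  if (MYTHREAD == 0) {
    int i, sum = 0;
    for (i = 0; i < THREADS; i++)
      sum += v[i];
    printf("sum = %d\n", sum);  /* 1 + 2 + ... + THREADS */
    upc_free(v);                /* release the shared allocation */
  }

  return 0;
}
```

Because upc_all_alloc is collective, every thread obtains a pointer to the same shared region, so thread 0 can read the elements written by all the others after the barrier.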

To protect access to shared data, you can use the following synchronization locks:

void upc_lock (upc_lock_t *l)
int upc_lock_attempt (upc_lock_t *l)
void upc_unlock(upc_lock_t *l)
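A minimal sketch of lock usage (again my own illustration, assuming a GUPC-style toolchain; upc_all_lock_alloc collectively allocates a single lock shared by all threads):

```c
#include <upc.h>
#include <stdio.h>

shared int counter = 0;

int main()
{
  /* All threads obtain a pointer to the same lock */
  upc_lock_t *lock = upc_all_lock_alloc();

  upc_lock(lock);       /* only one thread enters the critical section at a time */
  counter += 1;
  upc_unlock(lock);

  upc_barrier;

  if (MYTHREAD == 0)
    printf("counter = %d\n", counter);   /* equals THREADS */

  return 0;
}
```

Without the lock, the read-modify-write of counter could race and lose updates; upc_lock_attempt is the non-blocking variant, returning immediately if the lock is already held.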

There are two types of barriers for synchronizing code. The upc_barrier construct is blocking. The split-phase barrier uses the upc_notify (non-blocking) and upc_wait (blocking) constructs. For example:

#include <upc_relaxed.h>
#include <stdio.h>

int main()
{
  int i;

  for (i=0; i<THREADS; i++) {
    upc_barrier;
    if (i == MYTHREAD)
      printf("Thread: %d\n", MYTHREAD);
  }

  return 0;
}

The corresponding output is shown below:

# gupc count.c -o count
# ./count -n 5

Thread:  0
Thread:  1
Thread:  2
Thread:  3
Thread:  4

You can refer to the GUPC user guide for more information.

January 20, 2015 03:00 PM

January 16, 2015


Finally integrating Gcov and Lcov tool into Cppagent build process

This is most probably my final task on Implementing Code Coverage Analysis for MTConnect Cppagent. In my last post I showed you how the executable files are generated using Makefiles. In Cppagent, the Makefiles are autogenerated by CMake, a cross-platform Makefile generator tool. To integrate Gcov and Lcov into the build system, we need to start from the very beginning of the process, which is CMake. The CMake commands are written in CMakeLists.txt files. A minimal CMake file could look something like this, with test_srcs as the source file and agent_test as the executable:

cmake_minimum_required (VERSION 2.6)


set(test_srcs menu.cpp)

add_executable(agent_test ${test_srcs})

Now let's expand and understand the CMakeLists.txt for cppagent.


This sets the path where CMake should look for files when the file or include_directories command is used. The set command assigns values to variables. You can print out all the available variables using the following code:

get_cmake_property(_variableNames VARIABLES)
foreach (_variableName ${_variableNames})
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()


Next section of the file:

 set(LibXML2_INCLUDE_DIRS ../win32/libxml2-2.9/include )
 if(CMAKE_CL_64)
   set(bits 64)
 else()
   set(bits 32)
 endif()
 file(GLOB LibXML2_LIBRARIES "../win32/libxml2-2.9/lib/libxml2_a_v120_${bits}.lib")
 file(GLOB LibXML2_DEBUG_LIBRARIES ../win32/libxml2-2.9/lib/libxml2d_a_v120_${bits}.lib)
 set(CPPUNIT_INCLUDE_DIR ../win32/cppunit-1.12.1/include)
 file(GLOB CPPUNIT_LIBRARY ../win32/cppunit-1.12.1/lib/cppunitd_v120_a.lib)

Here, we check which platform we are working on, and the library variables are set to the Windows-based libraries accordingly. We will discuss the file command later.

 set(LINUX_LIBRARIES pthread)

Next, if the OS platform is Unix based, the uname command is executed as a child process and its output is stored in the CMAKE_SYSTEM_NAME variable. In a Linux environment, "Linux" will be stored in CMAKE_SYSTEM_NAME; hence, we set the variable LINUX_LIBRARIES to pthread (the threading library on Linux). Next we find something similar to our test CMakeLists.txt. The project command sets the project name, version, etc. The next line stores the source file paths in the variable test_srcs:

set( test_srcs file1 file2 ...)

Now let us discuss the next few lines.

file(GLOB test_headers *.hpp ../agent/*.hpp)

The file command is used to manipulate files. You can read, write and append to files; GLOB additionally allows globbing, generating a list of files matching the expression you give. So here a wildcard expression, *.hpp, is used to generate a list of all the header files in the given folders.

include_directories(../lib ../agent .)

This command tells CMake to add the specified directories to the list of directories it searches when looking for a file.

find_package(CppUnit REQUIRED)

This command looks for an external package and loads its settings. REQUIRED ensures that the build stops with an error if the package cannot be loaded.


add_definitions is where the additional compile time flags are added.
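For example, a call could look like this (the flags below are placeholders for illustration, not the actual cppagent definitions):

```cmake
add_definitions(-DEXAMPLE_FLAG -DVERSION_STRING="1.2.0")
```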

add_executable(agent_test ${test_srcs} ${test_headers})

This line generates an executable target for the project named agent_test, with test_srcs and test_headers as its source and header files respectively.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES})

This line links the executable with its libraries.

::Gcov & Lcov Integration::

Now that we know our CMake file well, let's make the necessary changes.

Step #1

Add two variables and set the appropriate compile and linking flags for gcov and lcov respectively.

set(GCOV_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
set(GCOV_LINK_FLAGS "-lgcov")

Step #2

Split the sources into two halves, one being the unit test source files and the other the cppagent source files. We are not interested in the unit test files' code coverage.

set(test_srcs test.cpp ...)
set(agent_srcs ../agent/adapter.cpp ...)

Step #3

As I said in Step 2, we are not interested in the unit test source files. So here we add the Gcov compile flags only to the cppagent source files, so that .gcno files are generated only for the agent source files.
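One way to attach the flags only to the agent sources is with set_source_files_properties; this is a sketch, assuming the GCOV_COMPILE_FLAGS variable from Step 1:

```cmake
# Instrument only the cppagent sources, not the unit tests
set_source_files_properties(${agent_srcs} PROPERTIES
    COMPILE_FLAGS "${GCOV_COMPILE_FLAGS}")
```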


Step #4

We also know that for coverage analysis we need to link the gcov library. We do this in the following way:

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES} ${GCOV_LINK_FLAGS}) 

Step #5

Since we love things to be automated, I added a target to make to automate the whole process: running the tests, copying the .gcno files and moving the .gcda files to a folder, then running the lcov command to read the files and prepare easily readable statistics, and finally the genhtml command to generate the HTML output. add_custom_target allows you to add a custom target for make (here I added “cov” as the target name), and COMMAND allows you to specify simple bash commands.

add_custom_target( cov
COMMAND [ -d Coverage ] && rm -rf Coverage/ || echo "No folder"
COMMAND mkdir Coverage
COMMAND agent_test
COMMAND cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
COMMAND mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
# result.info is the intermediate lcov tracefile consumed by genhtml
COMMAND cd Coverage && lcov -t "result" -o result.info -c -d .
COMMAND cd Coverage && genhtml result.info -o coverage
COMMENT "Generated Coverage Report Successfully!"
)


Now, to build the tests and generate the report:

Step #1 cmake .    # in the project root, i.e., cppagent/
Step #2 cd test    # since we want to build only the tests
Step #3 make       # builds the agent_test executable
Step #4 make cov   # runs the tests, copies the files to the Coverage folder, generates the report

So, we just need to open Coverage/coverage/index.html to view the analysis report. The final report will look something like this.

by subho at January 16, 2015 10:38 AM

January 15, 2015

Sayan Chowdhury

Fedora 21 Release Party, Bangalore

The Fedora Project announced the release of Fedora 21 on December 09, 2014. To celebrate the release a Fedora 21 release party was organized at Red Hat, Bangalore with the help of Archit and Humble.

The event was scheduled to start at 10 AM, but people started coming in from 9:30 AM itself. Around 40 people turned up, among them a good number of college students.

The release party finally started at 10:30 AM with Archit, who gave an introduction to Fedora. Then rtnpro gave a talk on what's new in the Fedora 21 release and discussed the Fedora.Next project. He was followed by Neependra Khare, who spoke on Project Atomic and Docker.

Then we gathered and celebrated the release of Fedora 21 by cutting a cake. After the celebration, I started off by explaining the various teams in Fedora and how to approach and contact them, and gave an overview/demo of the wiki pages, mailing lists and IRC. The final talk was given by Sinny on basic RPM packaging; it covered the basic aspects of RPM packaging and how to create an RPM package of your own.

At 1:30PM, there was an Open House session where everybody participated actively sharing their views, queries and experiences. Fedora 21 LiveCDs were distributed among the attendees.

Thanks to all the organizers for organizing an awesome Fedora 21 release party. Looking forward to being part of other Fedora events in the future.

January 15, 2015 12:00 PM

January 13, 2015

Soumya Kanti Chakraborty


FSCONS has become an integral part of the Mozilla Sweden community, since we started the community's revived journey at FSCONS last year. You can read my last year's blog here. This year, while planning for the event, we decided to increase our footprint and have more than just a booth. Two of our talks got accepted for FSCONS 2014.

Focus Areas/Objectives

  • Increase Mozilla presence in Nordics with a Mozilla booth.
  • Discuss current l10n activities in Mozilla Sweden and how to grow our base; try to involve more people in contributing to l10n.
  • Try to recruit new Mozillians and contributors for the Mozilla Sweden Community.
  • Showcase Firefox OS and its various devices; try to make the booth a marketing podium for Firefox OS devices.
  • Organize a Swedish Mozilla community get-together to discuss pitfalls and the road ahead.

Event Takeaways

  • Despite some last-minute budget constraints, we were still able to get Åke, Martin and myself to Gothenburg for the event. Thanks to all the folks who helped adjust other needs, which finally kept us within the planned budget for the event.
  • Åke and I spoke about “Webmaker, Connected Learning and Libraries” on Sunday morning. We were in the main keynote hall and had full attendance during the whole session. The talk went well, with us explaining how digital literacy and connected learning act as an epicenter of the present-day knowledge map. There were a lot of questions asked, and hopefully we lived up to expectations in answering them.
  • The next talk, on the same day, was about the journey of the Mozilla community since the last FSCONS. Oliver was the co-speaker with me. The sole purpose of the session was to make the attendees aware of the Mozilla Sweden community, attract more contributors, share the hiccups and success metrics of our journey, and, all in all, be more visible across communities in Sweden. That talk drew a full house as well :)
  • The conference runs for two days, and we were fortunate to have our talks on the 2nd day. I say fortunate because on the 1st day people are packed with enthusiasm and a willingness to know and learn, and are very patient and curious about everything they perceive, while the 2nd day (a Sunday) is lousy. So our booth was super busy on Saturday (the 1st day) with questions, answers and feedback. On the 2nd day we had our sessions, which drew full houses (lucky!), and after that our booth again got a lot of attention from post-session feedback, questions and getting along. So while other booths were doing so-so on the 2nd day, we kept the fire burning :)
  • Our community talk hit right on the spot: we got 4-5 queries about how to contribute and where to start looking for things that interest them in Mozilla. We took no time to respond and provided them with all the needed details. (4-5 people coming forward to contribute is a big thing for us, considering we are a small active community.)
  • On the 2nd day we also had a community meeting to discuss the roles, the task list and future plans for l10n in Sweden. Åke, Martin, Oliver and I joined in, and we had a really effective l10n meetup.
  • I spoke with the language team of the University of Gothenburg (the FSCONS venue), and they promised to help us secure more l10n contributions for Mozilla in the days ahead.
  • This time we had multiple Flame devices, all flashed to the latest Firefox OS. They were a crowd puller, especially as the Flame is not so common here in Sweden; the few people who have Firefox OS phones got them from eBay (ZTE Open). The crowd coming to our booth was curious, and took a lot of time playing with the devices and asking questions. They were super excited to see Flame and Firefox OS advancing up the charts so fast.


FSCONS this year was much more successful than last year. We fulfilled all the goals and agenda metrics we set for the event and were very happy to complete it so satisfactorily. Thanks to Åke, Martin and Oliver for lending a big hand in the whole event, without whom it would not have been so worthwhile this year.

We will keep coming to FSCONS to sort of mark the community anniversary and increase the community presence in Nordics.

Below are photo sets –

by Chakraborty SoumyaKanti at January 13, 2015 12:41 AM

January 09, 2015


Using Gcov and Lcov to generate Test Coverage Stats for Cppagent

In my last post we generated code coverage statistics for a sample C++ program. In this post I will use gcov and lcov to generate similar code coverage for the tests in cppagent. To use gcov, we first need to compile the source files with the --coverage flag. Our sample C++ program was a single file, so it was easy to compile, but cppagent uses Makefiles to build the project. Hence, I started with the Makefile, looking for the build instructions.

In my previous posts I discussed the steps for building the agent_test executable, which start by running the make command in the test folder. So I started tracing the build steps from the Makefile in the test folder. Since we run make without any parameters, the default target gets executed.

The first few lines of the file were as below.

# Default target executed when no arguments are given to make.

default_target: all

.PHONY : default_target

These lines specify that the default target for this build is all. Moving down the file, we see the rules for all.

# The main all target

all: cmake_check_build_system

cd /home/subho/work/github/cppagent_new/cppagent && $(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles /home/subho/work/github/cppagent_new/cppagent/test/CMakeFiles/progress.marks

cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all

$(CMAKE_COMMAND) -E cmake_progress_start /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 0

.PHONY : all

So here in the line

cd /home/subho/work/github/cppagent_new/cppagent && $(MAKE) -f CMakeFiles/Makefile2 test/all

We can see Makefile2 is invoked with target test/all.

In Makefile2, towards the end of the file, we can see the test/all target build instructions:

# Directory level rules for directory test

# Convenience name for "all" pass in the directory.

test/all: test/CMakeFiles/agent_test.dir/all

.PHONY : test/all

The rule says to run the commands defined under target test/CMakeFiles/agent_test.dir/all. These commands are:


$(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/depend

$(MAKE) -f test/CMakeFiles/agent_test.dir/build.make test/CMakeFiles/agent_test.dir/build

$(CMAKE_COMMAND) -E cmake_progress_report /home/subho/work/github/cppagent_new/cppagent/CMakeFiles 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58

@echo "Built target agent_test"

.PHONY : test/CMakeFiles/agent_test.dir/all

The first two lines run the build.make file with the targets ‘test/CMakeFiles/agent_test.dir/depend‘ and ‘test/CMakeFiles/agent_test.dir/build‘. The build.make file contains the compile instructions for each of the C++ files. This file is in the ‘test/CMakeFiles/agent_test.dir’ folder, along with flag.make, link.txt and other files. The flag.make file contains all the compile flags, and ‘link.txt‘ contains the library flags needed by the linker. By adding the --coverage flag to these files, we can make the C++ source files compile with gcov linked in, so .gcno files are generated when the make command is run.

After that, we run agent_test as usual, which creates the .gcda data files. We then gather the .gcda and .gcno files together, run the lcov and genhtml commands, and the HTML output is obtained.


by subho at January 09, 2015 05:35 PM

December 27, 2014

Sayan Chowdhury

Migrate a running process into tmux

Being a regular tmux user, I find that migrating a running process into tmux using reptyr comes in handy.

reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

The package is available in the Fedora/Ubuntu repositories.

% sudo yum install -y reptyr        # For Fedora users
% sudo apt-get install -y reptyr    # For Ubuntu users

The steps to migrate a process are:

  • Send the current foreground job to the background using CTRL-Z.
  • List all the background jobs using `jobs -l`. This will get you the PID:
% jobs -l
[1]  + 16189 suspended  vim foobar.rst

Here the PID is 16189

  • Start a new tmux or screen session. I will be using tmux
% tmux
  • Reattach the background process using
% reptyr 16189

If this error appears

Unable to attach to pid 16189: Operation not permitted
The kernel denied permission while attaching

Then type in the following command as root.

% echo 0 > /proc/sys/kernel/yama/ptrace_scope

These commands are compatible with screen as well.

December 27, 2014 12:00 PM

December 26, 2014

Soumya Kanti Chakraborty


The last few months have been thoroughly hectic for me. In college, “Stack theory in Data Structures” was a must-learn, and it has come in as an exceptionally handy utility throughout my professional career as a developer.

Now, implementing the stack concept “Last In, First Out (LIFO)” while writing my blogs, I will start from the recent activities and roll back to where my last post ended.

I am still struggling with my time management skills, especially as I'm a procrastinator. Nevertheless, I will keep on writing my blog :)

by Chakraborty SoumyaKanti at December 26, 2014 03:42 AM

December 23, 2014

Souradeep De

Native apps using web technologies with Node-Webkit

node-webkit allows you to design and implement desktop applications using web technologies. Everything runs on the client side and is backed by Node.js. It offers the perfect combo for building native apps using Node.js + HTML.

node-webkit is an app runtime based on Chromium and node.js. You can write native apps in HTML and JavaScript with node-webkit. It also lets you call Node.js modules directly from the DOM and enables a new way of writing native applications with all Web technologies.

It’s created and developed in the Intel Open Source Technology Center.

node-webkit can be downloaded from here.

Setting the node-webkit command:

alias nw='path/to/node-webkit/executable'

node-webkit also comes with the browser restrictions removed, so it adapts perfectly to native app development.

App creation:

A package.json and an index.html are the minimum requirements to create an app. An example package.json would look like:

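A minimal package.json, with placeholder values of my own choosing, could be:

```json
{
  "name": "hello-app",
  "main": "index.html",
  "window": {
    "title": "Hello",
    "width": 800,
    "height": 600
  }
}
```

The main field points node-webkit at the page to open on startup, and the window field controls the native window's appearance.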

Packaging the app is even simpler. We only need to create a zip archive and change its extension to “.nw”. This archive can then be executed with the node-webkit command.

Or, the app can be shipped as a stand-alone one, so users don't need to download node-webkit separately to run it, making it easy for all kinds of end users. More information here.


by desouradeep at December 23, 2014 02:40 PM