Planet dgplug

May 25, 2015

Shakthi Kannan

Haskell Functions

[Published in Open Source For You (OSFY) magazine, August 2014 edition.]

This second article in the series on Haskell explores a few functions.

Consider the function sumInt to compute the sum of two integers. It is defined as:

sumInt :: Int -> Int -> Int
sumInt x y = x + y

The first line is the type signature, where the function name is separated from its type by a double colon (::). The argument types and the return type are separated by arrows (->). Thus, the above type signature tells us that the sumInt function takes two arguments of type Int and returns an Int. Note that function names must always begin with a lowercase letter; names are usually written in camelCase style.

You can create a Sum.hs Haskell source file using your favourite text editor, and load it into the Glasgow Haskell Compiler's interactive environment (GHCi) as follows:

$ ghci
GHCi, version 7.6.3:  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.

Prelude> :l Sum.hs
[1 of 1] Compiling Main             ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumInt
sumInt :: Int -> Int -> Int

*Main> sumInt 2 3
5

If we check the type of sumInt with arguments, we get:

*Main> :t sumInt 2 3
sumInt 2 3 :: Int

*Main> :t sumInt 2
sumInt 2 :: Int -> Int

The value of sumInt 2 3 is an Int as defined in the type signature. We can also partially apply the function sumInt with one argument and its return type will be Int -> Int. In other words, sumInt 2 takes an integer and will return an integer with 2 added to it.
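The claim that sumInt 2 is itself a function can be checked directly. A minimal sketch, reusing the sumInt definition from above (the name staged is just for illustration):

```haskell
sumInt :: Int -> Int -> Int
sumInt x y = x + y

-- sumInt 2 3 parses as (sumInt 2) 3: first build a
-- one-argument function, then apply it to 3
staged :: Int
staged = (sumInt 2) 3   -- evaluates to 5, same as sumInt 2 3
```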

Every function in Haskell takes only one argument. So, we can think of the sumInt function as one that takes an argument and returns a function, which in turn takes another argument and computes the sum. This returned function can be given a name; for example, a sumTwoInt function that adds 2 to an Int using the sumInt function, as shown below:

sumTwoInt :: Int -> Int
sumTwoInt x = sumInt 2 x

The ‘=’ sign in Haskell signifies a definition, not a variable assignment as seen in imperative programming languages. We can thus omit the ‘x’ on both sides, and the code becomes even more concise:

sumTwoInt :: Int -> Int
sumTwoInt = sumInt 2

By loading Sum.hs again in the GHCi prompt, we get the following:

*Main> :l Sum.hs
[1 of 1] Compiling Main             ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumTwoInt
sumTwoInt :: Int -> Int

*Main> sumTwoInt 3
5

Let us look at some examples of functions that operate on lists. Consider list ‘a’ which is defined as [1, 2, 3, 4, 5] (a list of integers) in the Sum.hs file (re-load the file in GHCi before trying the list functions).

a :: [Int]
a = [1, 2, 3, 4, 5]

The head function returns the first element of a list:

*Main> head a
1

*Main> :t head
head :: [a] -> a

The tail function returns everything except the first element from a list:

*Main> tail a
[2,3,4,5]

*Main> :t tail
tail :: [a] -> [a]

The last function returns the last element of a list:

*Main> last a
5

*Main> :t last
last :: [a] -> a

The init function returns everything except the last element of a list:

*Main> init a
[1,2,3,4]

*Main> :t init
init :: [a] -> [a]

The length function returns the length of a list:

*Main> length a
5

*Main> :t length
length :: [a] -> Int

The take function picks the first ‘n’ elements from a list:

*Main> take 3 a
[1,2,3]

*Main> :t take
take :: Int -> [a] -> [a]

The drop function drops ‘n’ elements from the beginning of a list, and returns the rest:

*Main> drop 3 a
[4,5]

*Main> :t drop
drop :: Int -> [a] -> [a]

The zip function takes two lists and creates a new list of tuples with the respective pairs from each list. For example:

*Main> let b = ["one", "two", "three", "four", "five"]

*Main> zip a b
[(1,"one"),(2,"two"),(3,"three"),(4,"four"),(5,"five")]

*Main> :t zip
zip :: [a] -> [b] -> [(a, b)]

The let expression defines the value of ‘b’ in the GHCi prompt. You can also define it in a way that’s similar to the definition of the list ‘a’ in the source file.

The lines function takes input text and splits it at newlines:

*Main> let sentence = "First\nSecond\nThird\nFourth\nFifth"

*Main> lines sentence
["First","Second","Third","Fourth","Fifth"]

*Main> :t lines
lines :: String -> [String]

The words function takes input text and splits it on white space:

*Main> words "hello world"
["hello","world"]

*Main> :t words
words :: String -> [String]

The map function takes a function and a list and applies the function to every element in the list:

*Main> map sumTwoInt a
[3,4,5,6,7]

*Main> :t map
map :: (a -> b) -> [a] -> [b]

The first argument to map is a function, which is enclosed within parentheses in the type signature (a -> b). This function takes an input of type ‘a’ and returns an element of type ‘b’. Thus, when operating over a list [a], map returns a list of type [b].
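In the example above, the element and result types happen to be the same, but map also accepts functions where ‘a’ and ‘b’ differ. A small sketch (the name asStrings is just for illustration), using show to convert each Int to a String:

```haskell
-- show :: Int -> String here, so mapping it over an [Int]
-- produces a [String]
asStrings :: [String]
asStrings = map show [1, 2, 3 :: Int]   -- ["1","2","3"]
```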

Recursion provides a means of looping in functional programming languages. The factorial of a number, for example, can be computed in Haskell, using the following code:

factorial :: Int -> Int
factorial 0 = 1
factorial n = n * factorial (n-1)

Defining factorial case by case on its input is called pattern matching. On running the above example with GHCi, you get:

*Main> factorial 0
1
*Main> factorial 1
1
*Main> factorial 2
2
*Main> factorial 3
6
*Main> factorial 4
24
*Main> factorial 5
120

Functions operating on lists can also be called recursively. To compute the sum of a list of integers, you can write the sumList function as:

sumList :: [Int] -> Int
sumList [] = 0
sumList (x:xs) = x + sumList xs

The notation (x:xs) represents a list, where ‘x’ is the first element in the list, and ‘xs’ is the rest of the list. On running sumList with GHCi, you get the following:

*Main> sumList []
0
*Main> sumList [1,2,3]
6

Sometimes, you will need a temporary function for a computation, which you will not need to use elsewhere. You can then write an anonymous function. A function to increment an input value can be defined as:

*Main> (\x -> x + 1) 3
4

Such functions are called lambda functions, and the ‘\’ is the notation for the Greek letter lambda. Another example is given below:

*Main> map (\x -> x * x) [1, 2, 3, 4, 5]
[1,4,9,16,25]

It is good practice, when composing programs, to write the type signature of a function first, and then write the body. Haskell is a functional programming language, and understanding the use of functions is very important.

May 25, 2015 10:00 AM

May 19, 2015

Kushal Das

CentOS Cloud SIG update

For the last few months, we have been working on the Cloud Special Interest Group in the CentOS project. The goal of this SIG is to provide the basic guidelines and infrastructure required by FOSS cloud infrastructure projects, so that we can build and maintain the packages inside the official CentOS repositories.

We have regular meetings at 1500 UTC every Thursday on the #centos-devel IRC channel. You can find last week’s meeting log here. RDO (OpenStack), OpenNebula and Eucalyptus were the first few projects to come forward and participate in forming the SIG. We also have a good deal of overlap with the Fedora Cloud SIG.

RDO is almost ready to do a formal release of Kilo on CentOS 7; the packages are in the testing phase. The OpenNebula team has started the process of getting the required packages built on CBS.

If you want to help, feel free to join the #centos-devel channel and give us a shout. We need more helping hands to package and maintain the various FOSS cloud platforms.

There are also two GSoC projects under CentOS which are related to the Cloud SIG. The first one is “Cloud in a box”, and the second one is “Lightweight Cloud Instance Contextualization Tool”. Rich Bowen, and Haikel Guemar are the respective mentors for the projects.

by Kushal Das at May 19, 2015 03:03 PM

May 10, 2015

Shakthi Kannan

Introduction to Haskell

[Published in Open Source For You (OSFY) magazine, July 2014 edition.]

Haskell, a free and open source programming language, is the outcome of 20 years of research. It has all the advantages of functional programming and an intuitive syntax based on mathematical notation. This article flags off a series in which we will explore Haskell at length.

Haskell is a statically typed, general purpose programming language. Code written in Haskell can be compiled, and can also be used with an interpreter. Static typing helps detect many bugs at compile time. The type system in Haskell is very powerful and can automatically infer types. Functions are treated as first-class citizens, and you can pass them around as arguments. It is a pure functional language and employs lazy evaluation by default, though it also supports strict evaluation and procedural styles where needed.
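Lazy evaluation means an expression is only computed when its result is actually demanded. A classic, minimal sketch of this (the name firstThree is just for illustration):

```haskell
-- [1 ..] is an infinite list of integers; lazy evaluation
-- means only the first three elements are ever computed
firstThree :: [Int]
firstThree = take 3 [1 ..]   -- [1,2,3]
```

Under strict evaluation, building the infinite list would never terminate; under lazy evaluation, take drives the computation and stops after three elements.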

Haskell code is known for its brevity. The latest language standard is Haskell 2010. The language supports many extensions, and has been gaining widespread interest in the industry due to its capability to run algorithms on multi-core systems. It supports concurrency, including software transactional memory. Haskell allows you to quickly create prototypes with its platform and tools. The Hoogle and Hayoo API search engines are available to query and browse Haskell packages and libraries. The entire set of Haskell packages is available on Hackage.

The Haskell Platform contains all the software required to get you started. On GNU/Linux, you can use your distribution's package manager to install it. On Fedora, for example, you can use the following command:

# yum install haskell-platform

On Ubuntu, you can use the following:

# apt-get install haskell-platform

On Windows, you can download and run HaskellPlatform-2013.2.0.0-setup.exe from the Haskell platform web site and follow the instructions for installation.

For Mac OS X, download either the 32-bit or 64-bit .pkg file, and click on either to proceed with the installation.

The most popular Haskell implementation is the Glasgow Haskell Compiler (GHC). To use its interpreter, GHCi, run ghci from the command prompt on your system:

$ ghci
GHCi, version 7.6.3:  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.

The Prelude prompt indicates that the basic Haskell library modules have been imported for your use.

To exit from GHCi, type :quit at the Prelude prompt:

Prelude> :quit
Leaving GHCi.

The basic data types used in Haskell are discussed below.

The Char data type represents a Unicode character. You can view the type of an expression using the command :type at the GHCi prompt:

Prelude> :type 's'
's' :: Char

The ‘::’ symbol separates the expression on the left from its data type on the right.

A Bool data type represents a logical value of either True or False.

Prelude> :type True
True :: Bool

Signed numbers with a fixed width are represented by the Int data type. The Integer type is used for signed numbers that do not have a fixed width.

Prelude> 5
5
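The difference between Int and Integer shows up at the boundary of the fixed width. A minimal sketch, assuming a typical 64-bit system (the width of Int is platform-dependent; the names below are just for illustration):

```haskell
-- maxBound exists only for fixed-width types such as Int
biggestInt :: Int
biggestInt = maxBound        -- 9223372036854775807 on a 64-bit system

-- Integer has no upper bound; it grows as needed
bigInteger :: Integer
bigInteger = 2 ^ 100         -- far beyond the range of a 64-bit Int
```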

The Double and Float types are used to represent floating point values; Double has higher precision than Float:

Prelude> 3.0
3.0

The basic data types can be combined to form composite types. There are two widely used composite types in Haskell, namely, lists and tuples. A list is a collection of elements of the same data type, enclosed within square brackets. A list of characters is shown below:

Prelude> :type ['a', 'b', 'c']
['a', 'b', 'c'] :: [Char]

The static typing in Haskell produces errors at compile time, or at load time in GHCi, when you mix data types inside a list. For example:

Prelude> ['a', 1, 2]

    No instance for (Num Char) arising from the literal `1'
    Possible fix: add an instance declaration for (Num Char)
    In the expression: 1
    In the expression: ['a', 1, 2]
    In an equation for `it': it = ['a', 1, 2]

You can have a list of lists as long as they contain the same data type:

Prelude> :type [['a'], ['b', 'c']]
[['a'], ['b', 'c']] :: [[Char]]

A tuple is an ordered collection of elements with a fixed size, enclosed within parentheses, where each element can be of a different data type. For example:

Prelude> :type ('t', True)
('t', True) :: (Char, Bool)

Note that the tuple with type (Char, Bool) is different from the tuple with type (Bool, Char).

Prelude> :t (False, 'f')
(False, 'f') :: (Bool, Char)

Haskell originates from the theory of lambda calculus, which was developed by Alonzo Church to formally study mathematics. In 1958, John McCarthy created Lisp, which relates programming with lambda calculus. In the 1970s, Robin Milner created a functional programming language called ML (meta language) for automated proofs of mathematical theorems. During the 1980s, there were a number of lazy functional programming languages scattered across the research community. Miranda, released by Research Software Ltd in 1985, was a very popular proprietary programming language.

A need arose to unify the different research developments, for which a committee was formed, and the first version of the standard was released in 1990. It was called Haskell 1.0, after the mathematician and logician Haskell Brooks Curry. Subsequently, there were four revisions: 1.1, 1.2, 1.3 and 1.4. In 1997, the Haskell 98 report was released. In 2009, the Haskell 2010 standard was published, and it is the latest standard as on date. It has Foreign Function Interface (FFI) bindings to interface with other programming languages. The Hugs interpreter is useful for teaching, while the Glasgow Haskell Compiler (GHC) is very popular. The paper by John Hughes, “Why Functional Programming Matters”, is an excellent paper to read. A number of software companies in the industry have begun to use Haskell in production systems.

We shall be exploring more features, constructs and use of the language in future articles.



May 10, 2015 04:00 PM

April 26, 2015

Chandan Kumar

[Event Report] April Python Pune Meetup 26th April, 2015

After a successful Python sprint in March 2015, we hosted the April Python Pune meetup on 26th April, 2015 at Zlemma Analytics Pvt. Ltd., Baner Road, Pune (India). This meetup focused on a Python packaging workshop, writing simple automation scripts using Fabric, and interacting with MySQL and SQLite databases using Python.

About 35 people attended this meetup. Most of them were professionals.

The meetup started at 10:30 A.M. with the ‘package your Python code’ workshop, where we explained:

  • Why should I package my Python code?
  • Tools required for packaging
  • Getting familiar with pip, virtualenv and setuptools
  • Python project structure
  • Creating a dummy project and packaging it
  • Creating a source distribution
  • Registering on PyPI and uploading the package

For that, I created a dummy project, myls, and explained the above steps.

After a small break, Suprith presented a talk on "My first automation script using Fabric". He started by introducing Fabric and how it differs from other automation tools like Ansible, Salt, Puppet and Chef, and then showed how to use Fabric to write a simple automation script.

The last talk was presented by Tejas, on interacting with MySQL and SQLite databases using Python, and on crawling data from a website using BeautifulSoup and storing it in the database. Here is the source code of the above demo.

By 01:30 P.M., Suraj had demonstrated his cool Python application speedup, a simple program to speed up internet access on your LAN, and Hardik demonstrated his final-year project on detecting new viruses on a Windows machine by reading the system calls made by .exe files and analysing those calls using data mining.

Finally, this awesome meetup came to an end at 02:00 P.M.

Below is the plan for the next three months of meetups, from May 2015 to July 2015.

  • Workshops/talks on Flask, data analytics, automation using Selenium and robots, and security
  • Hackathon: craft your idea into a real program, and contribute to your favourite upstream open source Python project.

Thanks to Ajit Nipunge and Zlemma for the venue arrangements, and to the speakers and attendees who made the meetup successful.

by Chandan Kumar at April 26, 2015 05:23 PM

April 24, 2015

Sanjiban Bairagya

My experiences at 2015

Last year I really wanted to attend but couldn’t because my train tickets were not confirmed by the time it started. So, I had made up my mind right then, that I will definitely attend it the next year by all means. So, this year I applied and was accepted as a speaker too. But tragedy struck again, when some college issues clashed with the date of my flight, so I had to reschedule it once, and then one more time due to one more clash, after which I could finally reach Amritapuri in the evening of Day 1 of conference when it was already over. So, yes I was sad as I had missed the first day, but that also meant that I should make the best of the second day for sure.

The second day of conference started with some great south-Indian breakfast, where I met up with some of the important people like Shantanu, Sinny, Pradeepto and others. The first talk for that day was supposed to start from around 10:00 am, and I was the one to speak in it. I spoke on the ‘Interactive Tours’ feature that I had implemented in Marble last year, and it was pretty well received (I hope). My talk was followed by the rest of the other talks of the day, which were all pretty awesome, and very interesting as well. I got to meet Devaja, Rishab, Pinak, and the rest of the speakers during the talks, and I loved to interact with each one of them.




After a couple of talks after mine, I was asked by Sinny Kumari whether I would like to volunteer in a qml workshop which was being held in one of the labs. I didn’t wanna miss this opportunity, so I said “yes”, and went to the lab with her. The workshop started in a few moments, after everyone settled down. It was Shantanu who was explaining most of the stuff, using his computer screen as a projection, with me, Sinny, Pradeepto and Pinak helping the attendees in their computers in case they needed help or had some query. It was a very productive session, amazingly led by Shantanu, and I loved every moment that was spent in it.

Well, the day ended with an awesome lunch, and a few more talks for the day. We were often approached by students from their college, asking us about our experiences with KDE, and how to start contributing. I answered them with my personal experience with Marble, and it went really well, with some good feedback from both parties. People were very enthusiastic and I loved spending time and exchanging information with them. After the end of all the talks, we went out for some sightseeing, rode a boat :P , saw some awesome views from a high building rooftop, went to the beach, had lots and lots of fun.



and then finally came back, where we were invited to a lab, where each of the speakers and the students shared their last viewpoints about the conference: what they liked about it, and what could be improved. We told them about our awesome experience and that we would love to come back here again. Speakers who were still in college were asked how they keep open-source contributions alive in their respective institutions; I told them about the GNU/Linux users’ group at my college and the events that it organizes. Pradeepto told us some really interesting and funny stories about KDE, which were both fun and motivational to listen to.



After all was said and done, all the speakers were given a certificate of appreciation, along with some books, and we walked back to the guesthouse. We had our very final celebration at night after returning back, with an awesome chicken and beer party in one of the rooms, till 1:30 am in the night. I think it would be fair to say that this was the best day of my life, and I am very glad to have bunked Durgapur for my flight the previous day, otherwise I would have missed out on all of these amazing moments, which have now turned into memories for life. Thanks to KDE, and especially to Dennis, without whom I wouldn’t even be in this position right now. Thanks to the organizers of and everyone else associated with it, for making my day special. I would love to come back to the next and the next and the next. Thanks a lot! :)

by sanjibanbairagya at April 24, 2015 05:29 AM

April 19, 2015

Arpita Roy

And i know a little more now :)

It feels terrible when you can’t write in for a long time. Well , the reason brings a wide smile too.
I am not a very busy person nor do i do a lot of work ,  but yes all i do is spend more than a little time on studying and learning about things and terms sparsely known to me.
The few days i spent got me my fruits :) Yes , i still need to code ” A LOT ” and i mean the caps.
Speaking about C , Now i am familiar with lots of things – Arrays , Pointers , Functions , Data Structure ,File Handling etc. Still need to dive into more and more details.
I , now know how to create a function , when to call it by name and when to call it by address and things about which i shall discuss after knowing more about them.Python is fine too :)
Happiness is when you know how well to manage things , which few days back seemed havoc ( you may know , if you are an avid reader of my blogs ;) )
There is a lot to learn yet and i invest most of my time doing it.. but there is a bad side to everything and to me it is COLLEGE ( i so hate it )
Assignments , Internals and what not get to my nerves and sometimes i feel like killing myself. Few days simply pass doing all these :( Yes , it is SAD.

P.S – All i am waiting for is , this semester to end and go on a long holiday :) To me , that is happiness +1 :)

by Arpita Roy at April 19, 2015 05:09 PM

April 18, 2015

Souradeep De

Disabling an nvidia graphics card with bbswitch on Fedora

bbswitch is a kernel module which automatically detects the required ACPI calls for two kinds of Optimus laptops. It has been verified to work with “real” Optimus and “legacy” Optimus laptops.

kernel-headers and kernel-devel are needed to install bbswitch with dkms. Installing with dkms ensures that bbswitch survives future kernel upgrades.

sudo yum install kernel-headers kernel-devel dkms

Download the latest stable version of bbswitch, extract, and install with dkms.

tar -xf bbswitch-0.5.tar.gz
cd bbswitch-0.5
sudo make -f Makefile.dkms

The nouveau driver needs to be unloaded before bbswitch can be loaded. If the nouveau driver is in use somewhere, the workaround is to blacklist nouveau and rebuild the initramfs.

A simple lsmod can reveal whether the nouveau driver is loaded:

lsmod | grep nouveau

You can also try unloading nouveau:

sudo modprobe -r nouveau

If the above fails, blacklist nouveau and rebuild the initramfs:

su -c "echo 'blacklist nouveau' >> /etc/modprobe.d/disable-nouveau.conf"
sudo mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
sudo dracut --omit-drivers nouveau /boot/initramfs-$(uname -r).img $(uname -r)

If all goes well, reboot.

Once the system has restarted, the nouveau driver can easily be unloaded and bbswitch loaded.

sudo modprobe -r nouveau
sudo modprobe bbswitch

Once bbswitch is loaded, disabling and enabling the graphics card is just a walk in the park:

sudo tee /proc/acpi/bbswitch <<< OFF    # disable
sudo tee /proc/acpi/bbswitch <<< ON     # enable

Verifying the status of the card is as easy as:

cat /proc/acpi/bbswitch

Filed under: General, Tweaks Tagged: bbswitch, nvidia

by desouradeep at April 18, 2015 08:54 PM

March 30, 2015

Chandan Kumar

[Event Report] Python Pune Sprint 28th Mar, 2015

Over the last three months, we conducted workshops and talks ranging from Python 101 to Django, Ansible, data analysis and a lot more. In the fourth month, on 28th Mar, 2015, we hosted a sprint, a.k.a. the Python Pune Sprint, at Red Hat, Pune (India).

The main objective was to help people who have a basic knowledge of Python and git to come forward and learn how to find issues, which will help them contribute to upstream Python projects. I had selected some trending Python projects with open issues and listed them in a gist.

40 people attended the sprint, most of whom were previous meetup attendees.

At about 10:30 A.M., the sprint started with a formal introduction, where I talked about:

  • How to use IRC and mailing lists
  • Tools: IDE, version control system, programming language, bug/issue tracker
  • Choose a project you want to contribute to
  • Since you are new to the project, check the README or CONTRIBUTING files
  • Follow the docs and set up the development environment
  • Check the issues and search for labels or tags with these keywords: {EasyFix, Contributor Friendly, Beginners, Easy, Low Hanging Fruits, Difficulty/beginners, Junior Jobs, Gnome Love etc.}
  • Pick an issue and read its description
  • Try to reproduce the issue based on the description
  • If you cannot understand the issue, use Google, ping someone on IRC, or send a mail to the mailing list of the related project
  • Once you have understood the issue, make a fix in a new branch
  • Create a patch or pull request and send it upstream
  • Follow up on the patch/pull request until it gets merged
  • Finally, you have made an awesome contribution to an upstream project

Anurag explained his project Crange and helped two contributors get their first patches merged.

By noon, many attendees had selected issues they wanted to hack on and started working on them. Praveen and I helped others find some easy fixes.

By the end of the day, we had more than 15 pull requests for the respective projects: {Fedora Infrastructure, Salt, Lohit Font, OpenStack-Horizon, Junction, Crange, werkzeug, click, sympy, tctoolkit, newcoder etc.}. Here is the list of all pull requests sent by the attendees.

Finally, the sprint ended by 04:00 P.M. with a feedback session. We look forward to hosting these kinds of sprints in upcoming meetups. Thanks to Red Hat for the venue and accommodation, and to all the mentors and attendees for making the first sprint successful.

Lessons Learned from this sprint:

  • Should ask attendees to come up with projects of their interest.
  • Need to gather upstream contributors so that they can provide helping hands to attendees.
  • Need a tool which lists the easy fixes of all projects in one place: OpenHatch gathers easy bugs, but not for all projects.
  • Need a feature in GitHub to query the issues of many projects by tag.

We hope to improve the sprint in upcoming meetups.

Some moments from the sprint:

by Chandan Kumar at March 30, 2015 07:11 PM

March 24, 2015

Kushal Das

Tunir, a simple CI with less pain

One of my job requirements is to keep testing the latest Fedora Cloud images. We have a list of tests from the Fedora QA team. But the biggest problem is that I don’t like doing these manually, so I was looking for a way to run them automatically. We could do this with a normal CI system, but there are two problems with that.

  • Most CI systems cannot handle cloud images, unless there is a real cloud running somewhere.
  • Maintaining the CI system and the cloud is a pain by my standards.

Tunir came out as a solution to these problems. It is a simple system which can run a predefined set of commands in a fresh cloud instance, or on a remote system. Btw, did I mention that you don’t need a cloud to run these cloud instances on your local system? This is possible thanks to code from Mike Ruckman.

Each job in Tunir requires two files, jobname.json and jobname.txt. The json file contains the details of the cloud image (if any) or the remote system, the RAM required for the VM, etc. The .txt file contains the shell commands to run in the system, plus two commands unique to Tunir. You can write @@ in front of any command to mark that it is expected to return a non-zero exit code. There is also a SLEEP NUMBER_OF_SECONDS option, which we use when we reboot the system and want Tunir to wait before executing the next command.

Tunir has a stateless mode, and I use that all the time :) In stateless mode, it does not save the results in any database; it prints them directly in the terminal.

$ tunir --job fedora --stateless

Tunir uses redis to store some configuration information, such as available ports. Remember to populate the configuration with available ports before running jobs.

You can install Tunir using pip, a review request is also up for Fedora. If you are on Fedora 21, you can just test with my package.

I am currently using unittest for the cloud test cases; they are available on my GitHub. You can use fedora.json and fedora.txt from the same repo to execute the tests. An example of the tests running inside Tunir is below (I am using this in the Fedora Cloud tests).

curl -O
tar -xzvf tunirtests.tar.gz
python -m unittest tunirtests.cloudtests
sudo systemctl stop crond.service
@@ sudo systemctl disable crond.service
@@ sudo reboot
sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
@@ sudo reboot
sudo python -m unittest tunirtests.cloudservice.TestServiceAfter

UPDATE: Adding the output from Tunir for test mentioned above.

sudo ./tunir --job fedora --stateless
[sudo] password for kdas: 
Got port: 2229
cleaning and creating dirs...
Creating meta-data...
downloading new image...
Local downloads will be stored in /tmp/tmpZrnJsA.
Downloading file:///home/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 (158443520 bytes)
Succeeded at downloading Fedora-Cloud-Base-20141203-21.x86_64.qcow2
download: /boot/vmlinuz-3.17.4-301.fc21.x86_64 -> ./vmlinuz-3.17.4-301.fc21.x86_64
download: /boot/initramfs-3.17.4-301.fc21.x86_64.img -> ./initramfs-3.17.4-301.fc21.x86_64.img
/usr/bin/qemu-kvm -m 2048 -drive file=/tmp/tmpZrnJsA/Fedora-Cloud-Base-20141203-21.x86_64.qcow2,if=virtio -drive file=/tmp/tmpZrnJsA/seed.img,if=virtio -redir tcp:2229::22 -kernel /tmp/tmpZrnJsA/vmlinuz-3.17.4-301.fc21.x86_64 -initrd /tmp/tmpZrnJsA/initramfs-3.17.4-301.fc21.x86_64.img -append root=/dev/vda1 ro ds=nocloud-net -nographic
Successfully booted your local cloud image!
PID: 11880
Starting a stateless job.
Executing command: curl -O
Executing command: tar -xzvf tunirtests.tar.gz
Executing command: python -m unittest tunirtests.cloudtests
Executing command: sudo systemctl stop crond.service
Executing command: @@ sudo systemctl disable crond.service
Executing command: @@ sudo reboot
Sleeping for 30.
Executing command: sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
Executing command: @@ sudo reboot
Sleeping for 30.
Executing command: sudo python -m unittest tunirtests.cloudservice.TestServiceAfter

Job status: True

command: curl -O
status: True

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  8019  100  8019    0     0   4222      0  0:00:01  0:00:01 --:--:--  4224

command: tar -xzvf tunirtests.tar.gz
status: True


command: python -m unittest tunirtests.cloudtests
status: True

Ran 4 tests in 0.036s

OK (skipped=1, unexpected successes=2)

command: sudo systemctl stop crond.service
status: True

command: @@ sudo systemctl disable crond.service
status: True

Removed symlink /etc/systemd/system/

command: @@ sudo reboot
status: True

command: sudo python -m unittest tunirtests.cloudservice.TestServiceManipulation
status: True

Ran 1 test in 0.282s


command: sudo python -m unittest tunirtests.cloudservice.TestServiceAfter
status: True

Ran 1 test in 0.070s


by Kushal Das at March 24, 2015 06:46 AM

March 18, 2015

Arpita Roy

Yet, a lot to learn.

Going on a break is becoming a regular bad habit of mine :P The culprit is Time.
After coming back to my as-usual boring life of college, I was busy with my internals. They went fine.
Coming back to my "learning" part: I still have so, so much to study.
Speaking of a few things I came across in C: arrays, number systems, a brief description of pointers and yes, FUNCTIONS. I felt like a master since I knew their details (thanks to Python :) ), but there is a lot to learn about functions too.
I am trying to work on a few codes that are not getting into my head.
A day passes, and at the end of every day I suppose myself to have grasped a few essential things.

P.S – All that I could discuss for today. Need to speed up my pace in the world of learning :)

by Arpita Roy at March 18, 2015 02:58 PM

March 09, 2015

Samikshan Bairagya

spartakus: Using sparse to have semantic checks for kernel ABI breakages

I have been working on this project, which I have named spartakus, that deals with kernel ABI checks through semantic processing of the kernel source code. I made the source code available on Github some time back, and it does deserve a blog post.

spartakus is a tool that can be used to generate checksums for exported kernel symbols through semantic processing of the source code using sparse. These checksums would constitute the basis for kernel ABI checks, with changes in checksums meaning a change in the kABI.

spartakus (which is currently a WIP) is forked from sparse and has been modified to fit the requirements of semantic processing of the kernel source for kernel ABI checks. This adds a new binary 'check_kabi' upon compilation, which can be used during the Linux kernel build process to generate the checksums for all the exported symbols. These checksums are stored in the Module.symvers file, which is generated during the build process if the variable CONFIG_MODVERSIONS is set in the .config file.

What purpose does spartakus serve?
In an earlier post I had spoken a bit about exported symbols constituting the kernel ABI and how its stability can be tracked through CRC checksums. genksyms has been the tool doing this job of generating checksums for the exported symbols that constitute the kernel ABI so far, but it has several problems/limitations that make it difficult for developers to maintain the stability of the kernel ABI. Some of these limitations, as determined by me, are mentioned below:

1. genksyms generates different checksums for structs/unions whose declarations are semantically the same. For example, take the following 2 declarations of the struct 'list_head':

struct list_head {
    struct list_head *next, *prev;
};

struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

Both declarations are essentially the same and should not result in a change in the kABI wrt 'list_head'. sparse treats these 2 declarations as semantically the same, so different checksums are not generated.

2. For variable declarations with just an unsigned/signed specification and no base type given, sparse considers the type to be int by default.

'unsigned foo' is converted to 'unsigned int foo' and then processed to get the corresponding checksum. genksyms, on the other hand, would generate different checksums for these 2 definitions even though they are semantically the same.

sparse is licensed under the MIT license, which is GPL compatible. The added files mentioned above are under the GPLv2 license, since I have used code from genksyms, which is licensed under GPLv2.

Source Code
Development on spartakus is in progress and the corresponding source code is hosted on Github.

Lastly, I know there will be at least one person wondering why the name 'spartakus'. Well, I do not totally remember, but it had something to do with sparse and Spartacus. It does sound cool though, doesn't it?

by Samikshan Bairagya at March 09, 2015 01:55 PM

February 26, 2015


Understanding RapidJson

With new technologies, software needs to evolve and adapt. My new task is to make cppagent generate output in Json (JavaScript Object Notation) format. Last week I spent some time trying out different libraries and finally settled on using Rapidjson. Rapidjson is a Json manipulation library for C++ which is fast, simple and compatible with different C++ compilers on different platforms. In this post we will be looking at example code to generate, parse and manipulate Json data. For people who want to use this library, I would highly recommend playing with and understanding the example code first.

First we will write a simple program to produce the sample Json below (the same simplewriter.cpp as in the examples):

{
    "hello" : "world",
    "t" : true,
    "f" : false,
    "n" : null,
    "i" : 123,
    "pi" : 3.1416,
    "a" : [0, 1, 2, 3]
}

To generate a Json output you need:

  • a StringBuffer object: a buffer object to hold the Json output.
  • a Writer object to write Json to the buffer. Here I have used a PrettyWriter object to write human-readable and properly indented Json output.
  • the functions StartObject/EndObject to open and close a Json object, i.e. "{" and "}" respectively.
  • the functions StartArray/EndArray to open and close a Json array, i.e. "[" and "]".
  • the functions String(), Uint(), Bool(), Null() and Double(), called on the writer object to write a string, an unsigned integer, a boolean, null and a floating point number respectively.
#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer);

int main() {
 StringBuffer s;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);                 // build the Json document
 cout << s.GetString() << endl;   // GetString() stringifies the Json
 return 0;
}

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();            // write "{"
 writer.String("hello");          // write key "hello"
 writer.String("world");          // write its string value
 writer.String("t");
 writer.Bool(true);               // write boolean value true
 writer.String("f");
 writer.Bool(false);
 writer.String("n");
 writer.Null();                   // write null
 writer.String("i");
 writer.Uint(123);                // write unsigned integer value
 writer.String("pi");
 writer.Double(3.1416);           // write floating point number
 writer.String("a");
 writer.StartArray();             // write "["
 for (unsigned i = 0; i < 4; i++)
     writer.Uint(i);              // array elements 0 1 2 3
 writer.EndArray();               // end array "]"
 writer.EndObject();              // end object "}"
}

Next we will manipulate the Json document and change the value for the key "hello" to "c++".

To manipulate:

  • first you need to parse your Json data into a Document object.
  • next you may take a Value reference to the value of the desired node/key, or you can access it directly as doc_object["key"].
  • finally you need to call the Accept method, passing the Writer object, to write the document to the StringBuffer object.

The function below changes the values for the keys "hello", "t" and "f" to "c++", false and true respectively.

template <typename Document>
void changeDom(Document& d){
 // any of the methods shown below can be used to change the document
 Value& node = d["hello"];  // using a reference
 node.SetString("c++");     // call SetString() on the reference
 d["f"] = true;             // access directly and assign
 d["t"].SetBool(false);     // or call SetBool() directly
}

Now to put it all together:

Before Manipulation
{
     "hello": "world",
     "t": true,
     "f": false,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [0, 1, 2, 3]
}
After Manipulation
{
     "hello": "c++",
     "t": false,
     "f": true,
     "n": null,
     "i": 123,
     "pi": 3.1416,
     "a": [0, 1, 2, 3]
}

The final code to display the above output:

#include "rapidjson/stringbuffer.h"
#include "rapidjson/prettywriter.h"
#include "rapidjson/document.h"
#include <iostream>

using namespace rapidjson;
using namespace std;

template <typename Writer>
void display(Writer& writer);

template <typename Document>
void changeDom(Document& d);

int main() {
 StringBuffer s;
 Document d;
 PrettyWriter<StringBuffer> writer(s);
 display(writer);                  // build the Json document
 cout << "Before Manipulation\n" << s.GetString() << endl;
 d.Parse(s.GetString());           // parse the buffer into a Document
 changeDom(d);                     // manipulate the document
 s.Clear();        // clear the buffer to prepare for a new json document
 writer.Reset(s);  // resetting writer for a fresh json doc
 d.Accept(writer); // writing parsed document to buffer
 cout << "After Manipulation\n" << s.GetString() << endl;
 return 0;
}

template <typename Document>
void changeDom(Document& d){
 Value& node = d["hello"];
 node.SetString("c++");
 d["f"] = true;
 d["t"].SetBool(false);
}

template <typename Writer>
void display(Writer& writer){
 writer.StartObject();
 writer.String("hello"); writer.String("world");
 writer.String("t");     writer.Bool(true);
 writer.String("f");     writer.Bool(false);
 writer.String("n");     writer.Null();
 writer.String("i");     writer.Uint(123);
 writer.String("pi");    writer.Double(3.1416);
 writer.String("a");
 writer.StartArray();
 for (unsigned i = 0; i < 4; i++)
     writer.Uint(i);
 writer.EndArray();
 writer.EndObject();
}

by subho at February 26, 2015 12:30 PM

January 16, 2015


Finally integrating Gcov and Lcov tool into Cppagent build process

This is most probably my final task on implementing code coverage analysis for MTConnect cppagent. In my last post I showed how the executable files are generated using Makefiles. In cppagent the Makefiles are actually autogenerated by CMake, a cross-platform Makefile generator tool. To integrate Gcov and Lcov into the build system we need to start from the very beginning of the process, which is CMake. The CMake commands are written in CMakeLists.txt files. A minimal CMake file could look something like this, with test_srcs as the source file and agent_test as the executable:

cmake_minimum_required (VERSION 2.6)


set(test_srcs menu.cpp)

add_executable(agent_test ${test_srcs})

Now let's expand and understand the CMakeLists.txt for cppagent.


This sets the path where CMake should look for files when the file or include_directories command is used. The set command assigns values to variables. You can print all the available variables using the following code:

get_cmake_property(_variableNames VARIABLES)
foreach (_variableName ${_variableNames})
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()


Next section of the file:

 set(LibXML2_INCLUDE_DIRS ../win32/libxml2-2.9/include )
 set(bits 64)
 set(bits 32)
 file(GLOB LibXML2_LIBRARIES "../win32/libxml2-2.9/lib/libxml2_a_v120_${bits}.lib")
 file(GLOB LibXML2_DEBUG_LIBRARIES ../win32/libxml2-2.9/lib/libxml2d_a_v120_${bits}.lib)
 set(CPPUNIT_INCLUDE_DIR ../win32/cppunit-1.12.1/include)
 file(GLOB CPPUNIT_LIBRARY ../win32/cppunit-1.12.1/lib/cppunitd_v120_a.lib)

Here, we check which platform we are working on, and accordingly the library variables are set to the Windows-based libraries. We will discuss the file command later.

 set(LINUX_LIBRARIES pthread)

Next, if the OS platform is Unix-based, we execute the command uname as a child process and store the output in the CMAKE_SYSTEM_NAME variable. If it is a Linux environment, "Linux" will be stored in the CMAKE_SYSTEM_NAME variable; hence we set the variable LINUX_LIBRARIES to pthread (the threading library for Linux). Next we find something similar to what we did in our test CMakeLists.txt. The project command sets the project name, version etc. The next line stores the source file paths in a variable test_srcs:

set( test_srcs file1 file2 ...)
Now let's discuss the next few lines.
file(GLOB test_headers *.hpp ../agent/*.hpp)

The file command is used to manipulate files. You can read, write and append files; GLOB allows globbing of files, generating a list of files matching the expression you give. So here a wildcard expression is used to generate a list of all the header files (*.hpp) in the given folders.

include_directories(../lib ../agent .)

This command tells CMake to add the specified directories to the list it searches when looking for a file.

find_package(CppUnit REQUIRED)

This command looks for a package and loads its settings. REQUIRED makes sure the external package is loaded properly; otherwise CMake stops with an error.


add_definitions is where additional compile-time flags are added.

add_executable(agent_test ${test_srcs} ${test_headers})

This line generates an executable target for the project named agent_test, with test_srcs and test_headers as its source and header files respectively.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES})

This line links the executable to its libraries.

::Gcov & Lcov Integration::

Now that we know our CMake file well, let's make the necessary changes.

Step #1

Add two variables and set the appropriate compile and linking flags for gcov and lcov respectively.

set(GCOV_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
set(GCOV_LINK_FLAGS "-lgcov")

Step #2

Split the sources into two halves, one being the unit test source files and the other the cppagent source files. We are not interested in the unit test files' code coverage.

set( test_srcs test.cpp
set(agent_srcs ../agent/adapter.cpp 

Step #3

As I said in Step 2, we are not interested in the unit test source files. So here we add the Gcov compile flags only to the cppagent source files, so that .gcno files are generated only for the agent source files.


Step #4

We also know that for coverage analysis we need to link the gcov library. Therefore, we do this in the following way.

target_link_libraries(agent_test ${LibXML2_LIBRARIES} ${CPPUNIT_LIBRARY} ${LINUX_LIBRARIES} ${GCOV_LINK_FLAGS}) 

Step #5

Since we love things to be automated, I added a target to make to automate the whole process: running the tests, copying the .gcno files and moving the .gcda files to a folder, then running the lcov command to read the files and prepare easily readable statistics, and finally the genhtml command to generate the HTML output. add_custom_target allows you to add a custom target for make (here I named the target "cov"), and COMMAND allows you to specify simple bash commands.

add_custom_target( cov
COMMAND [ -d Coverage ]&&rm -rf Coverage/||echo "No folder"
COMMAND mkdir Coverage
COMMAND agent_test
COMMAND cp CMakeFiles/agent_test.dir/__/agent/*.gcno Coverage/
COMMAND mv CMakeFiles/agent_test.dir/__/agent/*.gcda Coverage/
COMMAND cd Coverage&&lcov -t "result" -o -c -d .
COMMAND cd Coverage&&genhtml -o coverage
COMMENT "Generated Coverage Report Successfully!"
)


Now to build test and generate report.

Step #1 cmake .    // In the project root, which is cppagent/
Step #2 cd test    // since we want to build only test
Step #3 make       // This will build the agent_test executable.
Step #4 make cov   // Runs test, Copies all files to Coverage folder, generates report.

So, we just need to open Coverage/coverage/index.html to view the analysis report. The final page will look something like this.

by subho at January 16, 2015 10:38 AM

January 15, 2015

Sayan Chowdhury

Fedora 21 Release Party, Bangalore

The Fedora Project announced the release of Fedora 21 on December 09, 2014. To celebrate the release a Fedora 21 release party was organized at Red Hat, Bangalore with the help of Archit and Humble.

The event was scheduled to start at 10AM, but people started coming in from 9:30AM itself. Around 40 people turned up, a good number of them college students.

The release party finally started at 10:30AM with Archit, who gave an introduction to Fedora. Then rtnpro gave a talk on what's new in the Fedora 21 release and discussed the Fedora.Next project. He was followed by Neependra Khare, who spoke on Project Atomic and Docker.

Then we gathered around and celebrated the release of Fedora 21 by cutting a cake. After the celebration, I started off by explaining the various teams in Fedora and how to approach and contact them, and gave an overview/demo of the wiki pages, mailing list and IRC. The final talk was given by Sinny on basic RPM packaging; it covered the basic aspects of RPM packaging and how to create an RPM package of your own.

At 1:30PM, there was an Open House session where everybody participated actively sharing their views, queries and experiences. Fedora 21 LiveCDs were distributed among the attendees.

Thanks to all the organizers for organizing an awesome Fedora 21 release party. Looking forward to being part of other Fedora events in the future.

January 15, 2015 12:00 PM

January 13, 2015

Soumya Kanti Chakraborty


FSCONS has become an integral part of the Mozilla Sweden community, the reason being that we started the revived journey of the local community at FSCONS last year. You can read my last year's blog post here. This year, while planning for the event, it was decided that we would increase our footprint and try to have more than just a booth. Two of our talks were accepted for FSCONS 2014.

Focus Areas/Objectives

  • Increase Mozilla presence in the Nordics with a Mozilla booth.
  • Discuss current l10n activities in Mozilla Sweden and how to increase our base; try to involve more people in contributing to l10n.
  • Try to recruit new Mozillians and contributors for the Mozilla Sweden Community.
  • Showcase Firefox OS and its various devices; try to make the booth a marketing podium for Firefox OS devices.
  • Organize a Swedish Mozilla community get-together to discuss pitfalls and the road ahead.

Event Takeaways

  • Despite some last-minute budget constraints, we were still able to have Åke, Martin and myself travel to Gothenburg for the event. Thanks to all the folks who helped adjust other needs, which finally kept us within the planned budget for the event.
  • Åke and I spoke about “Webmaker, Connected Learning and Libraries” on Sunday morning. We were in the main keynote hall and had full attendance during the whole session. The talk went well, with us explaining how digital literacy and connected learning act as an epicenter of the present-day knowledge map. There were a lot of questions asked, and hopefully we lived up to expectations in answering them.
  • The next talk the same day was about the journey of the Mozilla community since the last FSCONS. Oliver was my co-speaker. The sole purpose of having such a session was to make the attendees aware of the Mozilla Sweden community, attract more contributors, share the hiccups and success metrics of our journey, and all in all be more visible across communities in Sweden. That talk went full house as well :)
  • The conference mainly runs for two days, and we were fortunate to have our talks on the 2nd day. Why do I say fortunate? Because on the 1st day people are normally packed with enthusiasm and a willingness to know and learn, and are very patient and curious about everything they perceive, while the 2nd day is lousy (and a Sunday). So our booth was super busy on Saturday (1st day) with full-on questions, answers and feedback. On the 2nd day we had our sessions (talks), which went full house (lucky!), and after that our booth again got a lot of attention due to post-session feedback, questions and getting along. So while other booths were doing so-so on the 2nd day, we kept the fire burning :)
  • Our community talk went right on the spot: we got 4-5 queries about how to contribute and where to start looking for things that interest them in Mozilla. We took no time to respond and provided them with all the needed details. (4-5 people coming forward to contribute is a big thing for us, considering we are a small active community.)
  • On the 2nd day we also had a community meeting to discuss the roles, the task list and things to do in future for l10n in Sweden. Åke, Martin, Oliver and I joined in, and we really had an effective l10n meetup.
  • I spoke with the language team of Gothenburg University (the FSCONS venue), and they promised to help us secure more l10n contributions for Mozilla in the days ahead.
  • This time we had multiple Flame devices, all flashed to the latest Firefox OS. They were a crowd puller, especially as the Flame is not so common here in Sweden; the few people who have Firefox OS phones got them from Ebay (ZTE Open). The crowd coming to our booth was curious and spent a lot of time playing with the devices and asking a list of similar questions. They were super excited to see Flame and Firefox OS advancing the charts so fast.


FSCONS this year was seemingly much more successful than last year. We tried to fulfill all the goals and agenda metrics we set for the event and were very happy to complete it so satisfactorily. Thanks to Åke, Martin and Oliver for lending a big hand in the whole event, without whom it would not have been so worthwhile this year.

We will keep coming to FSCONS to sort of mark the community anniversary and increase the community presence in Nordics.

Below are photo sets –

by Chakraborty SoumyaKanti at January 13, 2015 12:41 AM