Planet dgplug

February 09, 2016

Kushal Das

Tunir 0.13 is released and one year of development

The Tunir 0.13 release is out. I already have a Koji build in place for F23. There are two major feature updates in this release.

AWS support

Yes, we now support testing on AWS EC2. You will have to provide your access tokens in the job configuration, along with the required AMI details. Tunir will boot up the instance, run the tests, and then destroy the instance. There is documentation explaining the configuration details.

CentOS images

We now also support CentOS cloud images and Vagrant images in Tunir. You can run the same tests on the CentOS images based on your needs.

One year of development

I started Tunir on Jan 12, 2015, which means it now has more than one year of development history. In the beginning it was just a project to help me with Fedora Cloud image testing, but it grew to the point where it is being used as the Autocloud backend to test Fedora Cloud and Vagrant images. We will soon start testing the Fedora AMI(s) using it too. Within this one year there were a total of 7 contributors to the project, and we are at around 1k lines of Python code. I am personally using Tunir for various other projects too. One funny thing from the commit timings: no commits on Sundays :)

by Kushal Das at February 09, 2016 12:31 PM

February 08, 2016

Kushal Das

No discriminatory tariffs for data services in India

Finally, we have won. The Telecom Regulatory Authority of India issued a press release some time ago stating that no one can charge different prices for different services on the Internet. The fight was on an epic scale: one side spent more than 100 million in advertisements and lobbying. The community fought back under the banner of #SaveTheInternet, and it worked.

In case you are wondering what this is about, you can start with the first post from Mahesh Murthy, and the reply to Facebook's response. Trust me, it makes for amazing reading. There is also this excellent report from Lauren Smiley, as well as a few other amazing replies from TRAI to Facebook about the same.

by Kushal Das at February 08, 2016 10:55 AM

February 06, 2016

Farhaan Bukhsh

Moz @ Mysore

“January, a month well spent,” was my thought after I came back from NIE Mysore. NIE was conducting a webmakers session, and Abhiram asked me and Abraar to conduct the session there. We have been conducting these sessions in various colleges; we even went to IIT Madras for the same.

For Mysore we were accompanied by Amjad; this was his first session and man, was he nervous! We tried to calm him down, but once he went on stage there was no looking back. Coming back to the Mysore experience: we boarded a train to Mysore, got really good co-passengers, and the three-hour journey turned into a really amazing experience.

We reached Mysore and Guru was there to receive us. We went directly to the college and started preparing for the event; there were about 100 students. They were a bit nervous and shy in the beginning, but a nice ice-breaking session made them really comfortable. Apart from the normal web development workshop, we thought we should take this session to a different level. After the session we stayed back to answer various queries, and we had a really productive discussion that ranged from IoT to TV series to FOSS contribution.

The next day we continued the session and covered the rest of the topics. I got a small part covering JavaScript, and we also gave a walkthrough of ECMAScript. All in all, the session was pretty amazing: awesome organizers, heart-warming hospitality, and crazy Windows users aka the participants; we actually inspired them to shun Windows and start off with Linux.

Some details have been omitted in the public interest. ;)

It was a session worth enjoying. Thanks a lot, NIE.


by fardroid23 at February 06, 2016 04:35 PM

February 05, 2016

Arpita Roy

A (not-so-simple) C program.

The excitement level is so high that I couldn't stop myself from writing this post.
Well, it may not be a very big deal for everyone, but it sure is a big thing for me. Yes, right now I feel like I am on cloud nine :)
A few days back, in one of our labs, we were supposed to write a C program, and the question given was quite complicated to understand at first. It took me almost two hours of sitting and thinking about what I should do.
I knew all the logic that the question required, but I had to struggle to write the code.

I will mention the question along with an example so that it is clear :)

The question “just” said to round an “integer”, not to any fixed place but up to any round-up digit (which means we take the round-up digit as input). Simple, isn't it? :D
Well, it should cover all the rounding-off rules:
1. If the number is to be rounded to m places, we note the n'th digit and check the (n+1)'th digit:
a. if the (n+1)'th digit is greater than 5, the n'th digit increases by 1;
b. if the (n+1)'th digit is less than 5, the n'th digit remains the same.
2. If the number is to be rounded to m places, we note the n'th digit, and if the (n+1)'th digit is exactly 5:
a. if the n'th digit is even, it remains the same;
b. if the n'th digit is odd, it increases by 1.

Example –
1. if the number is 23476, rounded to m = 4 places: n = 7 and n+1 = 6,
then the output should be 2348
2. if the number is 23474, rounded to m = 4 places: n = 7 and n+1 = 4,
then the output should be 2347
3. if the number is 23465, rounded to m = 4 places: n = 6 and n+1 = 5,
then the output should be 2346
4. if the number is 23475, rounded to m = 4 places: n = 7 and n+1 = 5,
then the output should be 2348
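The rules above amount to round-half-to-even on the digit that survives. Here is a minimal Python sketch of the same logic (the function name and structure are mine, not the lab solution; it assumes the number is positive and m is smaller than its digit count):

```python
def round_to_places(num, m):
    """Keep the first m digits of a positive integer,
    rounding half-to-even as per the rules above."""
    nobit = len(str(num))            # total number of digits
    div = 10 ** (nobit - m)
    kept = num // div                # the first m digits; the n'th digit is kept % 10
    nxt = (num // (div // 10)) % 10  # the (n+1)'th digit
    if nxt > 5:
        return kept + 1
    if nxt < 5:
        return kept
    # nxt == 5: an even n'th digit stays, an odd one rounds up
    return kept if kept % 2 == 0 else kept + 1

print(round_to_places(23476, 4))  # 2348
print(round_to_places(23465, 4))  # 2346
```

The four examples above all come out right with this sketch.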

Lots of logic. Unluckily, none of the students could do it :( including me :P. Many students tried Google, but even Google failed to answer them.
I felt bad, not because I couldn't write the code in class, but because in spite of knowing the entire logic I couldn't frame the code.
The day ended, I came back to my room and just flopped onto the bed. I stretched my legs, glued my eyes to the screen, took a pen and paper, and started writing whatever I had in mind. My code was a little messy; it gave errors at a few points. Finally, I asked sayan da for help, but maybe I couldn't explain the question to him properly.

The only thing that i missed doing was:

y=num;
while(y!=0)
{
nobit++;
y=y/10;
}

and the rest of it was perfect. I was okay with my code :) Thanks to kartick for helping me with this little thing.
Here is the code:

#include <stdio.h>

int main()
{
    int num, abc, bit, bit1, i, y, nobit = 0, div = 1, round;
    printf("Please enter a number: ");
    scanf("%d", &num); /* enter number */
    y = num;
    while (y != 0)
    {
        nobit++;   /* count the digits */
        y = y / 10;
    }
    printf("Please enter the roundup digit: ");
    scanf("%d", &round); /* enter rounding digit; must be less than nobit */
    for (i = nobit; i > round; i--)
        div *= 10;
    abc = num / div;                 /* the first 'round' digits */
    bit = abc % 10;                  /* the n'th digit */
    bit1 = (num / (div / 10)) % 10;  /* the (n+1)'th digit */
    if (bit1 < 5)
        printf("%d", abc);
    if (bit1 > 5)
        printf("%d", abc + 1);
    if (bit1 == 5)
    {
        if (bit % 2 == 1)
            printf("%d", abc + 1);
        else
            printf("%d", abc);
    }
    return 0;
}
It felt really good to see my entire logic working right in front of my eyes.
This went a little long, but it was worth writing :)

P.S. – Dry runs really help :)


by Arpita Roy at February 05, 2016 06:27 PM

Trishna Guha

Why is Testing important?

Why is writing tests for your code as important as writing the code for your product?

We write tests to check:

  • whether the functionality of the code you wrote or added works properly;
  • whether the new code you added breaks the existing code.

So it's better to catch the bugs in your code before they reach others ;) .

I have been working on fedora-bodhi for the last few days, and I chose an issue there to work on. Though it looked simple enough, it really needed some tricks to solve the bug. The interesting thing is that it was not clearly visible that tricks were needed; I only came to know that after I wrote tests for the code I added.

So the solution for the issue has the following conditions:

  1. When an update of a build in bodhi is ‘pending’ and it requests ‘testing’, and that request is revoked, the status of that build should be set to ‘unpushed’.
  2. When an update of a build in bodhi is ‘testing’ and it requests ‘stable’, and that request is revoked, the status of that build should remain ‘testing’.
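The two conditions form a tiny state machine, which is exactly the kind of behaviour a unit test can pin down. Here is a toy sketch of the idea (this is not bodhi's actual Update model or API, just an illustration of the two rules):

```python
# Toy illustration only -- not bodhi's real model or API.
class Update:
    def __init__(self, status, request):
        self.status = status    # 'pending' or 'testing'
        self.request = request  # 'testing' or 'stable'

    def revoke(self):
        # Rule 1: revoking a pending update's 'testing' request
        # sets its status to 'unpushed'.
        if self.status == 'pending' and self.request == 'testing':
            self.status = 'unpushed'
        # Rule 2: revoking a testing update's 'stable' request
        # leaves its status as 'testing'.
        self.request = None

def test_revoke_pending():
    u = Update('pending', 'testing')
    u.revoke()
    assert u.status == 'unpushed'

def test_revoke_testing():
    u = Update('testing', 'stable')
    u.revoke()
    assert u.status == 'testing'
```

Tests like these are what tell you whether the patch really does what it looks like it does.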

I pushed a patch accordingly and received a positive comment from threebean :) . Then threebean and lmacken told me to write tests for it, because “sometimes what we see is not the truth!” If you look at the patch, it really seemed to work. But when I wrote a test for the code I added, the test failed. As pingou suggested, I tried printing some debugging statements to make sure the test was exercising the code I added. That is how I came to know that the test was not going through my code :|. After struggling for a day, I caught that my code was conflicting with another handler here which also has the action ‘revoke’.

Then I modified my code to solve that conflict and received positive review from pingou ;) .

Now I strongly feel that writing tests is as important as writing code, and that testing makes you write better code as well :) . The earlier patch was not behaving the way it was expected to. I wouldn't have realized that my earlier patch would fail if I hadn't written tests for it, and if that patch had been merged, the bug would have been discovered only after it reached the masses. So isn't it a smart practice to catch the bugs in your code by writing tests before they reach the masses? ;)

The PR can be found here; it is finally merged. Thanks to pingou for continuously reviewing my patches, and thanks to threebean and lmacken for encouraging and helping me to write tests :) .

 


by Trishna Guha at February 05, 2016 01:50 PM

February 03, 2016

Tosin Damilare James Animashaun

Ubuntu Africa Calls

This post is about the Ubuntu Africa community's meeting, held in their IRC channel on Wednesday the 26th of January, 2016. You could read the raw logs of the meeting here, but my guess is that reading raw logs might not be the most enjoyable task for your eyes, so I have tried to make this blog post a more pleasant-on-the-eyes version of it. :)



Ubuntu Africa is a community of Ubuntu (and Linux) users within the shores of the African continent. There are several country-specific local communities (LoCos); this group is scoped to the whole continent and aims to bring those LoCos together.

We believe that coming together to work together will bring about more rapid growth. Owing to our disparate individual skills, we need communities like this to connect us with people who are skilled in the areas where we are lacking.

Looking through the directory of Ubuntu LoCos, I find it a little surprising that there is no effort toward an active community in my home country, Nigeria. Many people I have met around here scarcely even know about IRC, more so with the coming of Slack.

My recent foray into the African developer community had informed me about this event a few days earlier.

Minutes before the meeting, some users (myself included) had arrived in the IRC channel and engaged in discussions while patiently waiting for the scheduled time.

The meeting began at the set time. It was, as is customary, chaired by a user who had been elected at the last meeting: Naeil Zoueidi (@Na3iL)[2] from Tunisia. The other person who seemed to have a good footing in the house was Miles Sharp (@Kilos)[2] from South Africa.

Attendees at the meeting came from different African nations, including Cameroon (the largest contingent, mostly first-timers), Tunisia, South Africa, Ghana, Zimbabwe, the DRC and Nigeria.

The primary language used during discussion was English. However, Kilos pointed out that if any user struggled with the English language, they could use their own language (most likely French) in the hope that someone would translate for them.


The #ubuntu-africa IRC channel has a bot called QA that logs meetings and does a few other tasks. At the beginning of meetings attending users are expected to introduce themselves using the format:

QA: I am [first name] [last name] - [country]

OR

QA: I am [first name] [last name] from [country]

When a user does this correctly, the bot responds to the user confirming receipt of the information. This can be done at any time during the meeting, so if a user comes in late, that's the first thing they are expected to do.

I have noticed that the bot performs no parsing whatsoever on the given data and just logs it as is. It is therefore in a user's best interest to adhere strictly to the format given above.


Before the meeting, there had been an agenda page -- I believe this was open for suggestions on the mailing list -- listing the issues to be discussed, and it was strictly followed during the meeting.


Meeting Excerpts

The Ubuntu Africa community is currently an unofficial group. It will need more activity to gain official recognition.

Firstly, it is important to note that all members have a mandate to try to convince users from African Linux local communities/user-groups to join the Ubuntu Africa community while staying active in their respective communities, as this community is country-agnostic and aims to bring us all under one umbrella.

The meeting had a number of first-timers (myself included) in attendance. The new members were recognised, and we learnt of the Ubuntu local community in Cameroon and their mailing list.

Users were encouraged to get the word out about the fledgling community through their social hangouts (Facebook, Twitter and the rest). More suggestions came in on how to publicize it: YouTube videos, Twitter posts (#ubuntu-africa) and blog posts (like the one you are reading now).

The agreement was that Pieter Engelbrecht (@chesedo) from South Africa would put up the blog post and create a tweet about the community using the hashtag "#ubuntu-africa". And just in case you want to help get the word out too, here's the link to share: http://ubuntu-africa.info


The chair of the meeting raised a point about making some improvements to the community's current website: adding a blog section to it. This brought our attention to the designer of the current portal, Raoul Snyman (@superfly) from South Africa, and due thanks were accorded by users.

@superfly revealed that the current site was implemented using the static site generator Nikola, meaning it already comes with a blogging feature, although it hadn't yet been activated, since no one had been available to blog on the site. @Na3iL offered to help with the blogging, and in the end it was agreed that the task would be handled collaboratively by @superfly (technically) and @Na3iL (editorially).

Stephen Mawutor Donkor (@mawutor) from Ghana made one final suggestion in this regard: getting involved in Ubuntu-Lab projects for schools as a way to gain some media attention. The suggestion was commended.


Now to something more technical. Another member of the group, Cameroonian @ongolaBoy, talked about the issues surrounding approval of our mirror. He stated that the status of the submission had gone from "pending" to "unofficial", which is a good thing.

He shared the URL to the Launchpad page of the mirror: https://launchpad.net/ubuntu/+mirror/miroir.cm.auf.org-archive

Everyone was impressed with this news.


The last topic tended toward "social" again, and it was kick-started by the chair with the question, "Any new coming events?"

Kilos mentioned the upcoming release of Ubuntu 16.04, which we are all anticipating in April. The Tunisian folks (@Na3iL and @elacheche_anis) got talking about a SysAdmin workshop still in the works.

@Na3iL then suggested an Ubucon. "Why don't we plan for an Ubucon?" he asked. He followed that up nicely with a definition of the term:

"An Ubucon is generally an informal, lightly structured gathering of Ubunteros. There are also other meetings and UbuntuConferences"


At this juncture, things were beginning to wind up as the chairperson of the meeting, @Na3iL, moved the motion to "Elect chairperson for next meeting". This was about the quickest thing concluded, as @Na3iL was immediately and unanimously re-elected.

The final motion of the day was the selection of a date for the next meeting. Someone suggested that this be handled on the mailing list as a way to get people to use the list, and everyone agreed that the date for the next meeting would be decided there.

I think the user who came in at this time deserves a mention: @d3r1ck. We didn't get their full identity.



[1] The word "nuxers" is a portmanteau of the words "Linux" and "users".
[2] The nicks don't need the preceding '@'; it is used here for emphasis.

by Tosin Damilare James Animashaun at February 03, 2016 01:10 PM

February 01, 2016

Tosin Damilare James Animashaun

On the Stellar Train!

Two days ago, I attended Stellar.org's fireside chat at Idea Hub in Lagos. It was an enlightening event; one that may well be my gateway into the world of open source in the year.

No doubt this will present a somewhat steep learning curve, given that my background in finance (and FinTech) is not much to write home about. The experience and payoffs will of course be worth the while, and I expect more developers to join the train, as the platform is still very welcoming at this point.

Stellar is a fairly new financial technology service that aims to be the de facto way in which we move money around.

From Stellar's FAQ page,

"Stellar is a decentralized protocol you can use to send and receive money in any pair of currencies. So for example, the protocol supports sending money in dollars and having it arrive in pesos."


I particularly like the analogy used here by the Executive Director of Stellar, Joyce Kim,

"The Stellar platform functions a lot like email whose underlying protocol is SMTP. Before SMTP, you could only email people that were in the same company, network or ISP as you"


She says it would be a lot like being able to send mail across different mail providers.

Most of the details of how Stellar works are still very unclear to me, but a few things I've been able to make sense of include:

  • Stellar operates on a decentralized network of servers

  • Stellar maintains an open digital ledger of transactions. This data is synchronized on all servers.

  • Financial institutions subscribed to the Stellar service (called gateways, which act as "trust" houses) can offer Stellar credits.

  • Stellar credits are used to resolve currency pairing issues.

  • Consensus is how Stellar verifies the credibility of a transaction before allowing it to go through.

Stellar.org provides an explain page to describe these concepts. However, I find this article very explanatory.


I have joined Stellar's public Slack team, and so far I've been received with open arms. I encourage budding programmers looking to get their feet wet with open source projects to join this platform and make some contributions. The mantra for me is "even if it fails..." I think this says enough already.

At the moment, the project I'm looking to contribute to is the Python library for interfacing with the Stellar core.

This project is still in beta and could readily use some help. It was suggested to me by Scott Fleckenstein (@nullstyle), the first engineer at Stellar. At the event, Scott tried to walk us through the technical nitty-gritty: the inner workings, the stack, and how to get started with contributing to the Stellar project.

These are just some of the things I have been able to wrap my mind around thus far. I am optimistic that once I am in the loop of working on the code, and following up on the leads I get from time to time, it will all become clear in due course.

by Tosin Damilare James Animashaun at February 01, 2016 10:00 AM

January 30, 2016

Arpita Roy

The Saga Continues !

Hello again :)
Well, the break was on purpose.
Please bear with the fact that I am back at college. The same old torture has begun. I can't explain how badly it took away all the beautiful time I spent with myself and my laptop.
I have not yet warmed to the subjects, because I hate them. The two subjects I am basically focusing on are Automata and Java.
So far I have dealt only with classes and objects, but yes, I wrote a lot of code to get my concepts clear.
** acting like a boss in my class ;)
Along with C and Python, Java is also trying to fit its pieces into me. Well, I am a little confused and going through a shortage of time, but I will still have to manage the trio.
Ah! It's going to be a lot of fun.
I am slowly getting isolated from my friend circle. They barely talk to me now.
** Kushal da, you were right :) I experienced it a little late :)
I am trying my level best, and I wish to get along properly with all my plans.
Shall get back very soon.


by Arpita Roy at January 30, 2016 01:38 PM

January 27, 2016

Shakthi Kannan

Introduction to Haskell - Web Programming

[Published in Open Source For You (OSFY) magazine, June 2015 edition.]

In this final article in the Haskell series, we shall explore how to use it for web programming.

Scotty is a web framework written in Haskell, which is similar to Ruby’s Sinatra. You can install it on Ubuntu using the following commands:

$ sudo apt-get install cabal-install
$ cabal update
$ cabal install scotty

Let us write a simple `Hello, World!’ program using the Scotty framework:

-- hello-world.hs

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty

main :: IO ()
main = scotty 3000 $ do
  get "/" $ do
    html "Hello, World!"

You can compile and start the server from the terminal using the following command:

$ runghc hello-world.hs 
Setting phasers to stun... (port 3000) (ctrl-c to quit)

The service will run on port 3000, and you can open localhost:3000 in a browser to see the `Hello, World!’ text. You can then stop the service by pressing Control-c in the terminal. You can also use Curl to make a query to the server. Install and test it on Ubuntu as shown below:

$ sudo apt-get install curl

$ curl localhost:3000
Hello, World!

You can identify the user client that made the HTTP request to the server by returning the “User-Agent” header value as illustrated in the following example:

-- request-header.hs

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty

main :: IO ()
main = scotty 3000 $ do
  get "/agent" $ do
    agent <- header "User-Agent"
    maybe (raise "User-Agent header not found!") text agent

You can execute the above code in a terminal using the following command:

$ runghc request-header.hs  
Setting phasers to stun... (port 3000) (ctrl-c to quit)

If you open the URL localhost:3000/agent in a browser on Ubuntu 14.10, it returns the following User-Agent information: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/41.0.2272.76 Chrome/41.0.2272.76 Safari/537.36. For the same URL request with Curl, the Curl version is returned, as shown below:

$ curl localhost:3000/agent -v

 * Hostname was NOT found in DNS cache
 *   Trying 127.0.0.1...
 * Connected to localhost (127.0.0.1) port 3000 (#0)
 > GET /agent HTTP/1.1
 > User-Agent: curl/7.37.1
 > Host: localhost:3000
 > Accept: */*
 > 
 < HTTP/1.1 200 OK
 < Transfer-Encoding: chunked
 < Date: Wed, 29 Apr 2015 07:46:21 GMT
 * Server Warp/3.0.12.1 is not blacklisted
 < Server: Warp/3.0.12.1
 < Content-Type: text/plain; charset=utf-8
 < 
 * Connection #0 to host localhost left intact

 curl/7.37.1

You can also return different content types (HTML, text, JSON) based on the request. For example:

-- content-type.hs

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty as W
import Data.Monoid
import Data.Text
import Data.Aeson

main :: IO ()
main = scotty 3000 $ do
  get "/hello" $ do
    html $ mconcat ["<h1>", "Hello, World!", "</h1>"]

  get "/hello.txt" $ do
    text "Hello, World!"

  get "/hello.json" $ do
    W.json $ object ["text" .= ("Hello, World!" :: Text)]

You can start the above server in a terminal as follows:

$ runghc content-type.hs 
Setting phasers to stun... (port 3000) (ctrl-c to quit)

You can then open the three URLs listed above in a browser to see the different output. The respective outputs when used with Curl are shown below:

$ curl localhost:3000/hello
<h1>Hello, World!</h1>

$ curl localhost:3000/hello.txt
Hello, World!

$ curl localhost:3000/hello.json
{"text":"Hello, World!"}

You can also pass parameters in the URL when you make a request. The param function can be used to retrieve the parameters as indicated below:

-- params.hs

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty
import Data.Monoid

main :: IO ()
main = scotty 3000 $ do
  get "/user" $ do
    name <- param "name"
    html $ mconcat ["<h1>Hello ", name, "</h1>"]

You can start the above server using the runghc command:

$ runghc params.hs 
Setting phasers to stun... (port 3000) (ctrl-c to quit)

You can now try the URL requests with and without parameters. The observed outputs are shown below:

$ curl localhost:3000/user
<h1>500 Internal Server Error</h1>Param: name not found!

$ curl localhost:3000/user?name=Shakthi
<h1>Hello Shakthi</h1>

The Hspec testing framework can be used for integration testing the web application. Install the required dependencies as shown below:

$ cabal install happy hspec hspec-wai hspec-wai-json

The content type example has been updated to use Hspec, as illustrated below:

-- content-type-spec.hs

{-# LANGUAGE OverloadedStrings, QuasiQuotes #-}
module Main (main) where

import Data.Monoid
import Data.Text

import           Network.Wai (Application)
import qualified Web.Scotty as W
import           Data.Aeson (object, (.=))

import           Test.Hspec
import           Test.Hspec.Wai
import           Test.Hspec.Wai.JSON

main :: IO ()
main = hspec spec

app :: IO Application
app = W.scottyApp $ do
  W.get "/hello.txt" $ do
    W.text "Hello, World!"

  W.get "/hello" $ do
    W.html $ mconcat ["<h1>", "Hello, World!", "</h1>"]

  W.get "/hello.json" $ do
    W.json $ object ["text" .= ("Hello, World!" :: Text)]

spec :: Spec
spec = with app $ do
  describe "GET /" $ do
    it "responds with text" $ do
      get "/hello.txt" `shouldRespondWith` "Hello, World!"

    it "responds with HTML" $ do
      get "/hello" `shouldRespondWith` "<h1>Hello, World!</h1>"

    it "responds with JSON" $ do
      get "/hello.json" `shouldRespondWith` [json|{text: "Hello, World!"}|]

You can compile the above code as shown below:

$ ghc --make content-type-spec.hs     
Linking content-type-spec ...

The following output is observed when you run the above built test executable:

$ ./content-type-spec 

GET /
  responds with text
  responds with HTML
  responds with JSON

Finished in 0.0010 seconds
3 examples, 0 failures

Please refer to the hspec-wai webpage at https://github.com/hspec/hspec-wai for more information.

Template support is available through many Haskell packages. The use of the blaze-html package is demonstrated below. Install the package first using the following command:

$ cabal install blaze-html

Consider a simple web page with a header and three unordered lists. Using blaze-html, the template can be written in Haskell DSL as follows:

-- template.hs

{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty as W
import Text.Blaze.Html5
import Text.Blaze.Html.Renderer.Text

main :: IO ()
main = scotty 3000 $ do
  get "/" $ do
    W.html . renderHtml $ do
      h1 "Haskell list"
      ul $ do
        li "http://haskell.org"
        li "http://learnyouahaskell.com/"
        li "http://book.realworldhaskell.org/"

You can compile the above code using GHC:

$ ghc --make template.hs 
Linking template ...

You can then execute the built executable, which starts the server as shown below:

$ ./template 
Setting phasers to stun... (port 3000) (ctrl-c to quit)

Opening a browser with URL localhost:3000 will render the expected HTML file. You can also verify the resultant HTML output using the Curl command as shown below:

$ curl localhost:3000
<h1>Haskell list</h1><ul><li>http://haskell.org</li><li>http://learnyouahaskell.com/</li><li>http://book.realworldhaskell.org/</li></ul>

It is good to separate the views from the actual application code. You can move the template content to a separate file as shown below:

-- Haskell.hs

{-# LANGUAGE OverloadedStrings #-}

module Haskell where

import Text.Blaze.Html5

render :: Html
render = do
  html $ do
    body $ do
      h1 "Haskell list"
      ul $ do
        li "http://haskell.org"
        li "http://learnyouahaskell.com/"
        li "http://book.realworldhaskell.org/"

The main application code is now simplified as shown below:

-- template-file.hs

{-# LANGUAGE OverloadedStrings #-}

import qualified Haskell
import Web.Scotty as W
import Text.Blaze.Html
import Text.Blaze.Html.Renderer.Text

blaze :: Text.Blaze.Html.Html -> ActionM ()
blaze = W.html . renderHtml

main :: IO ()
main = scotty 3000 $ do
  get "/" $ do
    blaze Haskell.render

You need to place both the source files (Haskell.hs and template-file.hs) in the same top-level directory, and you can then compile the template-file.hs file that will also compile the dependency Haskell.hs source file as shown below:

$ ghc --make template-file.hs 

[1 of 2] Compiling Haskell          ( Haskell.hs, Haskell.o )
[2 of 2] Compiling Main             ( template-file.hs, template-file.o )
Linking template-file ...

You can now run the server as follows:

$ ./template-file 
Setting phasers to stun... (port 3000) (ctrl-c to quit)

Executing template-file produces the same output as in the case of the template.hs example.

$ curl localhost:3000
<html><body><h1>Haskell list</h1><ul><li>http://haskell.org</li><li>http://learnyouahaskell.com/</li><li>http://book.realworldhaskell.org/</li></ul></body></html>

You can refer to the Scotty wiki page at https://github.com/scotty-web/scotty/wiki for more information.

The clay package is a CSS preprocessor similar to LESS and Sass. You can install it using the following Cabal command:

$ cabal install clay

Let us consider a simple CSS example that generates a list of fonts to be used in the body section of an HTML page. The corresponding Clay Haskell embedded DSL looks like the following:

-- clay-simple.hs

{-# LANGUAGE OverloadedStrings #-}

import Clay

main :: IO ()
main = putCss exampleStylesheet

exampleStylesheet :: Css
exampleStylesheet = body ? fontFamily ["Baskerville", "Georgia", "Garamond", "Times"] [serif]

You can compile the above code as follows:

$ ghc --make clay-simple.hs 
[1 of 1] Compiling Main             ( clay-simple.hs, clay-simple.o )
Linking clay-simple ...

You can then execute clay-simple to generate the required CSS output as shown below:

$ ./clay-simple

body
{
  font-family : "Baskerville","Georgia","Garamond","Times", serif;
}

/* Generated with Clay, http://fvisser.nl/clay */

A more comprehensive example is shown below for the HTML pre tag:

-- clay-pre.hs

{-# LANGUAGE OverloadedStrings #-}

import Clay

main :: IO ()
main = putCss $
  pre ?
    do border dotted (pt 1) black
       whiteSpace (other "pre")
       fontSize (other "8pt")
       overflow (other "auto")
       padding (em 20) (em 0) (em 20) (em 0)

You can compile the above clay-pre.hs file as shown below:

$ ghc --make clay-pre.hs 
[1 of 1] Compiling Main             ( clay-pre.hs, clay-pre.o )
Linking clay-pre ...

Executing the compiled clay-pre binary produces the following output:

$ ./clay-pre 

pre
{
  border      : dotted 1pt rgb(0,0,0);
  white-space : pre;
  font-size   : 8pt;
  overflow    : auto;
  padding     : 20em 0em 20em 0em;
}

/* Generated with Clay, http://fvisser.nl/clay */

You can also add custom values using the Other type class, or use the fallback operator -: to explicitly specify values. For example:

-- clay-custom.hs

{-# LANGUAGE OverloadedStrings #-}

import Clay

main :: IO ()
main = putCss $
  body ?
       do fontSize (other "11pt !important")
          "border" -: "0"

Compiling and executing the above code produces the following output:

$ ghc --make clay-custom.hs 
[1 of 1] Compiling Main             ( clay-custom.hs, clay-custom.o )
Linking clay-custom ...

$ ./clay-custom 

body
{
  font-size : 11pt !important;
  border    : 0;
}

/* Generated with Clay, http://fvisser.nl/clay */

You can explore Clay further at the official project homepage http://fvisser.nl/clay/.

A number of good books are available for further learning. I recommend the following books available online and in print:

  1. Bryan O’Sullivan, Don Stewart, and John Goerzen. (December 1, 2008). Real World Haskell (http://book.realworldhaskell.org/). O’Reilly.

  2. Miran Lipovaca. (April 21, 2011). Learn You a Haskell for Great Good! A Beginner’s Guide (http://learnyouahaskell.com/). No Starch Press.

The https://www.haskell.org website also has plenty of useful resources. You can also join the haskell-cafe@haskell.org and beginners@haskell.org mailing lists ( https://wiki.haskell.org/Mailing_lists ) for discussions. The folks in the #haskell channel on irc.freenode.net are also very helpful.

I hope you enjoyed learning Haskell through this series as much as I enjoyed creating it. Please feel free to write to me (author at shakthimaan dot com) with any feedback or suggestions.

Happy Hacking!

January 27, 2016 10:30 PM

January 24, 2016

Farhaan Bukhsh

Crypto 101

I was not keen on cryptography before this incident. I was working on a PR for Pagure where I had to write a feature that gives local users the ability to change their password. You can see the PR here. There are two ways to log in to Pagure: you can use your FAS account if you use Pagure online, or local authentication if you are hosting it yourself. I learned a lot from this feature, starting with "how not to write an authentication system", because my first PR was all about the wrong practices. Thanks a lot to Pingou and Peter for correcting me and guiding me on how to do it.

Peter pointed me to this video, which cleared up a lot of my questions. Some of the basic rules for writing an authentication system are:

  1. Never ever store the password itself
  2. Always adopt the latest encryption techniques
  3. Use a constant-time function to compare passwords

The technique we used is called salting, and it is a very beautiful technique. Before going into salting, let us look at what hashing is. Hashing converts a password into a string that cannot be inverted: practically, once you convert a string into its hash, there is no way to retrieve the original. The drawback is that the same input string always produces the same hash string, which means the hashes can be brute-forced, or rainbow tables can be used to recover the password. Salting is just an extra layer over hashing that makes the result more random.

Here salting comes into the picture: by mixing a random salt into each password before hashing, the entropy of the stored strings is increased enormously, and your site becomes more secure. Even though you are storing passwords in your database, not even you can decipher them; all you see is a junk string.

Python comes in handy here: bcrypt is a library that gives us a simple interface to this functionality without getting into much detail.

Now here comes a tricky part: we do not compare passwords with the normal comparison operators; we use something called a constant-time function. The sole reason is that normal comparison functions are written to compare two strings as fast as possible, returning as soon as the first difference is found. That early exit leaks timing information an attacker can measure, so when dealing with passwords a constant-time comparison function is used instead.
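Putting these rules together, the whole flow can be sketched with Python's standard library (the PR itself used bcrypt; pbkdf2_hmac here is just an illustrative stand-in, and the function names are hypothetical):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    # A fresh random salt makes identical passwords hash to different strings.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt + digest  # store the salt alongside the hash

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    # Constant-time comparison: timing does not reveal how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

Note that only the salt and the hash are ever stored; the password itself never touches the database.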

The PR evolved very organically. After discussing various aspects, we ended up writing test cases, which exposed a vulnerability in the code. We fixed that error and went on to complete the PR; Pingou wrote most of the test cases, and after a lot of hard work over a long time the PR was finally complete. I even got my name in some of the files.


by fardroid23 at January 24, 2016 01:15 PM

January 23, 2016

Sayan Chowdhury

Fedimg: Course of Action

Fedimg is a Python-powered service built to make Fedora Cloud images available on popular cloud providers, such as Amazon Web Services (AWS). The plan is to support more cloud providers, such as Rackspace and Google Compute Engine.

Fedimg listens on fedmsg for messages about Koji image builds.

Plan of work

Storing the AMIs is costing us a lot, so the Cloud WG has decided to delete the older AMIs. The deletion criteria would be:

  1. Delete the pre-release AMIs after the final release
  2. Delete the nightly build AMIs five days after the build
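The criteria could be sketched as a small predicate (purely illustrative; the field names are assumptions, not fedimg's actual data model):

```python
from datetime import datetime, timedelta

def should_delete(ami, today, final_release_done):
    # Pre-release AMIs go away once the final release is out;
    # nightly AMIs go away five days after they were built.
    if ami.kind == "pre-release":
        return final_release_done
    if ami.kind == "nightly":
        return today - ami.build_date > timedelta(days=5)
    return False
```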

Currently fedimg boots up an instance, downloads the image to a volume, and then takes a snapshot to create the AMI.

gholms suggested that we use euca2ools instead: with the euca-import-volume command we can directly create a volume out of the image, then take the snapshot and create the AMI.

Currently, the upload method for the EC2 service is one big chunk of code. My plan is to break code like this into smaller chunks and add a retry decorator to it. The decorator will handle things like the delay between attempts, the number of times to retry, etc.
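A generic retry decorator along these lines might look like the following (a sketch, not the actual fedimg code; the parameter names are illustrative):

```python
import functools
import time

def retry(tries=3, delay=2, exceptions=(Exception,)):
    """Re-run the decorated function up to `tries` times, sleeping
    `delay` seconds between attempts, before giving up."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries:
                        raise  # out of attempts, propagate the error
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(tries=3, delay=1)
def create_volume():
    ...  # e.g. a cloud API call that may fail transiently
```

Each chunk of the upload flow could then be wrapped with its own retry policy instead of one monolithic method handling all the failure cases inline.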

Currently, Tunir has the feature to test the AMIs, but it is in the development branch; before starting on this issue we also need to work on Autocloud in between, to integrate the AMI-testing feature.

This is an easy fix to add standard logging, and can be completed alongside the other tasks in progress.

This issue is blocked on legal matters. As I work on other issues I will keep checking the status of this ticket; once it is cleared, I can swap it with some low-priority issue and write up the service.

Interested?

If you are interested in helping out, you can ping us on IRC in #fedora-apps on Freenode.

January 23, 2016 06:18 AM

January 17, 2016

Sayan Chowdhury

Fedora Meetup Pune - January 2016

On 15th January 2016, we had our first Fedora meetup at Pune. The venue was earlier decided to be the Red Hat office, Pune, but due to unavailability of space the meetup was moved to my apartment.

The meetup was scheduled to start around 14:30 IST but, with people pouring in a bit late, we started the event at 15:00 IST. The turnout was good, much more than we had expected, with a total of 18 people. Four Fedora Ambassadors were present at the meetup: Praveen, Siddesh, Kushal, and /me.

Kushal talking

The meetup started off with a round of introductions. As a couple of the attendees were first-timers, Kushal and Siddesh talked about how to start contributing to FOSS from a programmer's perspective.

Then I started off with an introduction to Bugyou. I explained the architecture, how it works, and the future plans for Bugyou. Next, Kushal explained the automatic testing of the Cloud/Atomic images in Fedora and also gave a small hands-on demo of Tunir and Tunirtests.

/me talking (Picture taken by Kushal)

Praveen later conducted a hands-on Ansible workshop. He showed the step-by-step process of writing inventories, playbooks, etc., and finished with an installation of Apache. After the hands-on, the participants tried their hand at Ansible by writing a playbook to copy a file from one place to another.

Praveen talking

At the end, we discussed future meetups and what topics we could cover. Our next meetup will be on 22nd January 2016, from 17:00 IST. We were requested to cover RPM packaging, a session on C programming by Siddesh, and the Python 2 to 3 migration.

January 17, 2016 02:58 PM

January 15, 2016

Trishna Guha

Handling Deadlock

This post is about handling a deadlock and what I did to avoid it. This is the first time I have worked on one. I chose an issue from fedora-bodhi about the error "Deadlock found when trying to get lock; try restarting transaction", which is quite common when working with databases.

Deadlock is an unwanted situation that arises when a process waits indefinitely for a resource that is held by another process. There are ways to avoid or handle a deadlock: the transaction involved can either be rolled back or restarted.

for attempt in range(max_retries):
    try:
        # perform the table transaction
        break
    except DeadlockError:
        # the transaction was rolled back; catch the error and try again
        continue

Lmacken guided me with some hints to handle the Bugzilla deadlock. Since it was a Bugzilla server-side issue, it was not simple for me to reproduce the bug, but it was quite easy to handle the server-side deadlock. The deadlock arose when a user added a comment to a bug in Bugzilla. So it was definitely a server-side issue, but it could be handled with a little trick in Bodhi. Lmacken told me to catch the fault triggered by the call that adds the comment and try again; it was an xmlrpclib.Fault exception. Thus the technique to handle the deadlock is the same as described above, and I was finally done with my patch. Thanks to Lmacken for helping me solve the bug and understand the technique :).
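The trick can be sketched like this (a rough illustration with hypothetical names, not the actual Bodhi patch; xmlrpclib is Python 2's module, called xmlrpc.client in Python 3):

```python
import xmlrpc.client as xmlrpclib  # plain `import xmlrpclib` on Python 2

def add_comment_with_retry(server, bug_id, comment, tries=3):
    # Retry the Bugzilla XML-RPC call when the server raises a Fault
    # (e.g. the deadlock error), re-raising once the attempts run out.
    for attempt in range(tries):
        try:
            return server.Bug.add_comment({'id': bug_id, 'comment': comment})
        except xmlrpclib.Fault:
            if attempt == tries - 1:
                raise
```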

Hence I learnt how to handle a deadlock, and my pull request was merged :)


by Trishna Guha at January 15, 2016 11:34 AM

December 30, 2015

Shakthi Kannan

Introduction to Haskell - Network Programming

[Published in Open Source For You (OSFY) magazine, May 2015 edition.]

In this article we shall explore network programming in Haskell.

Let us begin with a simple TCP (Transmission Control Protocol) client and server example. The network package provides a high-level interface for communication. You can install the same in Fedora, for example, using the following command:

$ sudo yum install ghc-network

Consider the following simple TCP client code:

-- tcp-client.hs

import Network
import System.IO

main :: IO ()
main = withSocketsDo $ do
         handle <- connectTo "localhost" (PortNumber 3001)
         hPutStr handle "Hello, world!"
         hClose handle

After importing the required libraries, the main function connects to a localhost server running on port 3001, sends a string “Hello, world!”, and closes the connection.

The connectTo function defined in the Network module accepts a hostname, port number and returns a handle that can be used to transfer or receive data.

The type signatures of the withSocketsDo and connectTo functions are as under:

ghci> :t withSocketsDo
withSocketsDo :: IO a -> IO a

ghci> :t connectTo
connectTo :: HostName -> PortID -> IO GHC.IO.Handle.Types.Handle

The simple TCP server code is illustrated below:

-- tcp-server.hs

import Network
import System.IO

main :: IO ()
main = withSocketsDo $ do
         sock <- listenOn $ PortNumber 3001
         putStrLn "Starting server ..."
         handleConnections sock

handleConnections :: Socket -> IO ()
handleConnections sock = do
  (handle, host, port) <- accept sock
  output <- hGetLine handle
  putStrLn output
  handleConnections sock

The main function starts a server on port 3001 and transfers the socket handler to a handleConnections function. It accepts any connection requests, reads the data, prints it to the server log, and waits for more clients.

Firstly, you need to compile the tcp-server.hs and tcp-client.hs files using GHC:

$ ghc --make tcp-server.hs
[1 of 1] Compiling Main             ( tcp-server.hs, tcp-server.o )
Linking tcp-server ...

$ ghc --make tcp-client.hs 
[1 of 1] Compiling Main             ( tcp-client.hs, tcp-client.o )
Linking tcp-client ...

You can now start the TCP server in a terminal:

$ ./tcp-server 
Starting server ...

You can then run the TCP client in another terminal:

$ ./tcp-client

You will now observe the “Hello, world” message printed in the terminal where the server is running:

$ ./tcp-server 
Starting server ...
Hello, world!

The Network.Socket package exposes more low-level socket functionality for Haskell and can be used if you need finer access and control. For example, consider the following UDP (User Datagram Protocol) client code:

-- udp-client.hs

import Network.Socket

main :: IO ()
main = withSocketsDo $ do
         (server:_) <- getAddrInfo Nothing (Just "localhost") (Just "3000")
         s <- socket (addrFamily server) Datagram defaultProtocol
         connect s (addrAddress server)
         send s "Hello, world!"
         sClose s

The getAddrInfo function resolves a host or service name to a network address. A UDP client connection is then requested for the server address, a message is sent, and the connection is closed. The type signatures of getAddrInfo, addrFamily, and addrAddress are given below:

ghci> :t getAddrInfo
getAddrInfo
  :: Maybe AddrInfo
     -> Maybe HostName -> Maybe ServiceName -> IO [AddrInfo]

ghci> :t addrFamily
addrFamily :: AddrInfo -> Family

ghci> :t addrAddress
addrAddress :: AddrInfo -> SockAddr

The corresponding UDP server code is as follows:

-- udp-server.hs

import Network.Socket

main :: IO ()
main = withSocketsDo $ do
         (server:_) <- getAddrInfo Nothing (Just "localhost") (Just "3000")
         s <- socket (addrFamily server) Datagram defaultProtocol
         bindSocket s (addrAddress server) >> return s
         putStrLn "Server started ..."
         handleConnections s

handleConnections :: Socket -> IO ()
handleConnections conn = do
  (text, _, _) <- recvFrom conn 1024
  putStrLn text
  handleConnections conn

The UDP server binds to localhost and starts to listen on port 3000. When a client connects, it reads a maximum of 1024 bytes of data, prints it to stdout, and waits to accept more connections. You can compile the udp-server.hs and udp-client.hs files using the following commands:

$ ghc --make udp-server.hs 
[1 of 1] Compiling Main             ( udp-server.hs, udp-server.o )
Linking udp-server ...

$ ghc --make udp-client.hs 
[1 of 1] Compiling Main             ( udp-client.hs, udp-client.o )
Linking udp-client ...

You can start the UDP server in one terminal:

$ ./udp-server 
Server started ...

You can then run the UDP client in another terminal:

$ ./udp-client

You will now see the “Hello, world!” message printed in the terminal where the server is running:

$ ./udp-server 
Server started ...
Hello, world!

The network-uri module has many useful URI (Uniform Resource Identifier) parsing and test functions. You can install the same on Fedora using the following command:

$ cabal install network-uri

The parseURI function takes a string and attempts to convert it into a URI. It returns ‘Nothing’ if the input is not a valid URI, and the URI otherwise. For example:

ghci> :m + Network.URI

ghci> parseURI "http://www.shakthimaan.com"
Just http://www.shakthimaan.com

ghci> parseURI "shakthimaan.com"
Nothing

The type signature of the parseURI function is given below:

ghci> :t parseURI
parseURI :: String -> Maybe URI

A number of functions are available for testing the input URI as illustrated in the following examples:

ghci> isURI "shakthimaan.com"
False

ghci> isURI "http://www.shakthimaan.com"
True

ghci> isRelativeReference "http://shakthimaan.com"
False

ghci> isRelativeReference "../about.html"
True

ghci> isAbsoluteURI "http://www.shakthimaan.com"
True

ghci> isAbsoluteURI "shakthimaan.com"
False

ghci> isIPv4address "192.168.100.2"
True

ghci> isIPv6address "2001:0db8:0a0b:12f0:0000:0000:0000:0001"
True

ghci> isIPv6address "192.168.100.2"
False

ghci> isIPv4address "2001:0db8:0a0b:12f0:0000:0000:0000:0001"
False

The type signatures of the above functions are as follows:

ghci> :t isURI
isURI :: String -> Bool

ghci> :t isRelativeReference
isRelativeReference :: String -> Bool

ghci> :t isAbsoluteURI
isAbsoluteURI :: String -> Bool

ghci> :t isIPv4address
isIPv4address :: String -> Bool

ghci> :t isIPv6address
isIPv6address :: String -> Bool

You can make a GET request for a URL and retrieve its contents. For example:

import Network
import System.IO

main = withSocketsDo $ do
    h <- connectTo "www.shakthimaan.com" (PortNumber 80)
    hSetBuffering h LineBuffering
    hPutStr h "GET / HTTP/1.1\nhost: www.shakthimaan.com\n\n"
    contents <- hGetContents h
    putStrLn contents
    hClose h

You can now compile and execute the above code, and it returns the index.html contents as shown below:

$ ghc --make get-network-uri.hs
[1 of 1] Compiling Main             ( get-network-uri.hs, get-network-uri.o )
Linking get-network-uri ...

$ ./get-network-uri 
HTTP/1.1 200 OK
Date: Sun, 05 Apr 2015 01:37:19 GMT
Server: Apache
Last-Modified: Tue, 08 Jul 2014 04:01:16 GMT
Accept-Ranges: bytes
Content-Length: 4604
Content-Type: text/html
...

You can refer to the network-uri package documentation at https://hackage.haskell.org/package/network-uri-2.6.0.1/docs/Network-URI.html for more detailed information.

The whois Haskell package allows you to query for information about hosting servers and domain names. You can install the package on Ubuntu, for example, using:

$ cabal install whois

The serverFor function returns a whois server that can be queried for obtaining more information regarding an IP or domain name. For example:

ghci> :m + Network.Whois

ghci> serverFor "shakthimaan.com"
Loading package array-0.4.0.1 ... linking ... done.
Loading package deepseq-1.3.0.1 ... linking ... done.
Loading package bytestring-0.10.0.2 ... linking ... done.
Loading package old-locale-1.0.0.5 ... linking ... done.
Loading package time-1.4.0.1 ... linking ... done.
Loading package unix-2.6.0.1 ... linking ... done.
Loading package network-2.6.0.2 ... linking ... done.
Loading package transformers-0.4.3.0 ... linking ... done.
Loading package mtl-2.2.1 ... linking ... done.
Loading package text-1.2.0.4 ... linking ... done.
Loading package parsec-3.1.9 ... linking ... done.
Loading package network-uri-2.6.0.1 ... linking ... done.
Loading package split-0.2.2 ... linking ... done.
Loading package whois-1.2.2 ... linking ... done.

Just (WhoisServer {hostname = "com.whois-servers.net", port = 43, query = "domain "})

You can use the above information with the whois1 function to query that specific whois server:

ghci> whois1 "shakthimaan.com" WhoisServer {hostname = "com.whois-servers.net", port = 43, query = "domain "}
Just "\nWhois Server Version 2.0\n\nDomain names in the .com and .net domains can now be registered\n
...

You can also use the whois function to return information on the server as shown below:

ghci> whois "shakthimaan.com"
Just "\nWhois Server Version 2.0\n\nDomain names in the .com and .net domains can now be registered\n
...

The type signatures of serverFor, whois1 and whois functions are as follows:

ghc> :t serverFor
serverFor :: String -> Maybe WhoisServer

ghci> :t whois1
whois1 :: String -> WhoisServer -> IO (Maybe String)

ghci> :t whois
whois :: String -> IO (Maybe String, Maybe String)

The dns package provides a number of useful functions to make Domain Name System queries, and handle the responses. You can install the same on Ubuntu, for example, using the following commands:

$ sudo apt-get install zlib1g-dev
$ cabal install dns

A simple example of finding the IP addresses for the haskell.org domain is shown below:

ghci> import Network.DNS.Lookup
ghci> import Network.DNS.Resolver

ghci> let hostname = Data.ByteString.Char8.pack "www.haskell.org"

ghci> rs <- makeResolvSeed defaultResolvConf

ghci> withResolver rs $ \resolver -> lookupA resolver hostname
Right [108.162.203.60,108.162.204.60]

The defaultResolvConf is of type ResolvConf and consists of the following default values:

--     * 'resolvInfo' is 'RCFilePath' \"\/etc\/resolv.conf\".
--
--     * 'resolvTimeout' is 3,000,000 micro seconds.
--
--     * 'resolvRetry' is 3.

The makeResolvSeed, and withResolver functions assist in making the actual DNS resolution. The lookupA function obtains all the A records for the DNS entry. Their type signatures are shown below:

ghci> :t makeResolvSeed
makeResolvSeed :: ResolvConf -> IO ResolvSeed

ghci> :t withResolver
withResolver :: ResolvSeed -> (Resolver -> IO a) -> IO a

ghci> :t lookupA
lookupA
  :: Resolver
     -> dns-1.4.5:Network.DNS.Internal.Domain
     -> IO
          (Either
             dns-1.4.5:Network.DNS.Internal.DNSError
             [iproute-1.4.0:Data.IP.Addr.IPv4])

The lookupAAAA function returns all the IPv6 ‘AAAA’ records for the domain. For example:

ghci> withResolver rs $ \resolver -> lookupAAAA resolver hostname
Right [2400:cb00:2048:1::6ca2:cc3c,2400:cb00:2048:1::6ca2:cb3c]

Its type signature is shown below:

lookupAAAA
  :: Resolver
     -> dns-1.4.5:Network.DNS.Internal.Domain
     -> IO
          (Either
             dns-1.4.5:Network.DNS.Internal.DNSError
             [iproute-1.4.0:Data.IP.Addr.IPv6])

The MX records for the hostname can be returned using the lookupMX function. An example for the shakthimaan.com website is as follows:

ghci> import Network.DNS.Lookup
ghci> import Network.DNS.Resolver

ghci> let hostname = Data.ByteString.Char8.pack "www.shakthimaan.com"

ghci> rs <- makeResolvSeed defaultResolvConf

ghci> withResolver rs $ \resolver -> lookupMX resolver hostname
Right [("shakthimaan.com.",0)]

The type signature of the lookupMX function is as under:

ghci> :t lookupMX
lookupMX
  :: Resolver
     -> dns-1.4.5:Network.DNS.Internal.Domain
     -> IO
          (Either
             dns-1.4.5:Network.DNS.Internal.DNSError
             [(dns-1.4.5:Network.DNS.Internal.Domain, Int)])

The nameservers for the domain can be returned using the lookupNS function. For example:

ghci> withResolver rs $ \resolver -> lookupNS resolver hostname
Right ["ns22.webhostfreaks.com.","ns21.webhostfreaks.com."]

The type signature of the lookupNS function is shown below:

ghci> :t lookupNS
lookupNS
  :: Resolver
     -> dns-1.4.5:Network.DNS.Internal.Domain
     -> IO
          (Either
             dns-1.4.5:Network.DNS.Internal.DNSError
             [dns-1.4.5:Network.DNS.Internal.Domain])

You can also return the entire DNS response using the lookupRaw function as illustrated below:

ghci> :m + Network.DNS.Types

ghci> let hostname = Data.ByteString.Char8.pack "www.ubuntu.com"

ghci> rs <- makeResolvSeed defaultResolvConf

ghci> withResolver rs $ \resolver -> lookupRaw resolver hostname A
Right (DNSFormat 
  {header = DNSHeader 
    {identifier = 29504, 
     flags = DNSFlags 
       {qOrR = QR_Response, 
        opcode = OP_STD, 
        authAnswer = False, 
        trunCation = False, 
        recDesired = True, 
        recAvailable = True, 
        rcode = NoErr}, 
     qdCount = 1, 
     anCount = 1, 
     nsCount = 3, 
     arCount = 3}, 
   question = [
     Question 
       {qname = "www.ubuntu.com.", 
        qtype = A}], 
     answer = [
       ResourceRecord 
         {rrname = "www.ubuntu.com.", 
          rrtype = A, 
          rrttl = 61, 
          rdlen = 4, 
          rdata = 91.189.89.103}], 
     authority = [
       ResourceRecord 
         {rrname = "ubuntu.com.", 
          rrtype = NS, 
          rrttl = 141593, 
          rdlen = 16, 
          rdata = ns2.canonical.com.},
       ResourceRecord 
         {rrname = "ubuntu.com.", 
          rrtype = NS, 
          rrttl = 141593, 
          rdlen = 6, 
          rdata = ns1.canonical.com.},
       ResourceRecord
         {rrname = "ubuntu.com.", 
          rrtype = NS, 
          rrttl = 141593, 
          rdlen = 6, 
          rdata = ns3.canonical.com.}], 
    additional = [
      ResourceRecord 
        {rrname = "ns2.canonical.com.", 
         rrtype = A, 
         rrttl = 88683, 
         rdlen = 4, 
         rdata = 91.189.95.3},
      ResourceRecord 
        {rrname = "ns3.canonical.com.", 
         rrtype = A, 
         rrttl = 88683, 
         rdlen = 4, 
         rdata = 91.189.91.139},
      ResourceRecord 
        {rrname = "ns1.canonical.com.", 
         rrtype = A, 
         rrttl = 88683, 
         rdlen = 4, 
         rdata = 91.189.94.173}]})

Please refer to the Network.DNS Hackage page at https://hackage.haskell.org/package/dns for more information.

December 30, 2015 05:30 PM

December 24, 2015

Praveen Kumar

Vagrant DNS with Landrush and Virtualbox and dnsmasq

Landrush is a pretty neat Vagrant plugin if you need a DNS server that is visible to both host and guest. On Mac OS it works out of the box, but to make it work on Linux we have to make some configuration changes to dnsmasq.

I assume that you are using the latest Vagrant and VirtualBox for this experiment. If you are using libvirt, please refer to Josef's blog post.

The Landrush DNS server runs on port 10053 (localhost) instead of 53, so we have to add an entry that redirects requests for our domain to Landrush. Follow the steps below to configure it.

Add the following to /etc/dnsmasq.conf:
listen-address=127.0.0.1

Then create the file below, which redirects our .vm traffic to Landrush:
$ cat /etc/dnsmasq.d/vagrant-landrush
server=/.vm/127.0.0.1#10053

Start/restart the dnsmasq service and check its status (it should be active):
$ sudo systemctl start dnsmasq.service
$ sudo systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2015-12-24 11:57:47 IST; 2s ago
Main PID: 19969 (dnsmasq)
CGroup: /system.slice/dnsmasq.service
└─19969 /usr/sbin/dnsmasq -k
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com systemd[1]: Started DNS caching server..
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com systemd[1]: Starting DNS caching server....
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: started, version 2.75 cachesize 150
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC ...ct inotify
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 127.0.0.1#10053 for domain vm
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: reading /etc/resolv.conf
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 127.0.0.1#10053 for domain vm
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 10.75.5.25#53
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 10.68.5.26#53
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: read /etc/hosts - 2 addresses
Hint: Some lines were ellipsized, use -l to show in full.


Make sure you put '127.0.0.1' as the first nameserver entry in /etc/resolv.conf.
$ cat /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 4.4.4.4

Make the following changes to your Vagrantfile:
$ cat Vagrantfile
PUBLIC_ADDRESS = "10.1.2.2"
PUBLIC_HOST = "your_host.vm"
config.vm.network "private_network", ip: "#{PUBLIC_ADDRESS}"
config.vm.hostname = "#{PUBLIC_HOST}"
config.landrush.enabled = true
config.landrush.host_ip_address = "#{PUBLIC_ADDRESS}"
config.landrush.tld = ".vm"
config.landrush.guest_redirect_dns = false

$ vagrant landrush ls
your_host.vm 10.1.2.2
2.2.1.10.in-addr.arpa your_host.vm

$ sudo netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 10.65.193.61:44319 0.0.0.0:* LISTEN 14810/weechat-curse
tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 127.0.0.1:45186 0.0.0.0:* LISTEN 433/GoogleTalkPlugi
tcp 0 0 127.0.0.1:39715 0.0.0.0:* LISTEN 433/GoogleTalkPlugi
tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 0.0.0.0:10053 0.0.0.0:* LISTEN 14966/ruby-mri
tcp 0 0 127.0.0.1:2222 0.0.0.0:* LISTEN 15200/VBoxHeadless
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 16871/dnsmasq
tcp 0 0 192.168.121.1:53 0.0.0.0:* LISTEN 16817/dnsmasq
tcp 0 0 192.168.124.1:53 0.0.0.0:* LISTEN 16810/dnsmasq
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2647/cupsd
tcp6 0 0 ::1:631 :::* LISTEN 2647/cupsd

$ ping your_host.vm
PING your_host.vm (10.1.2.2) 56(84) bytes of data.
64 bytes from 10.1.2.2: icmp_seq=1 ttl=64 time=0.332 ms
64 bytes from 10.1.2.2: icmp_seq=2 ttl=64 time=0.238 ms


by Praveen Kumar (noreply@blogger.com) at December 24, 2015 07:06 AM