Planet dgplug

August 29, 2016

Trishna Guha

PythonPune August Meetup: Back to Basics


PythonPune's August meetup was held at Red Hat Pune on 28th August, 2016. Thanks for always supporting us! Many people came down for the meetup; some of them were students and some were professionals.

This time the topic was Back to Basics [Python3 Workshop]. We all know that support for Python2 is going to be over by 2020; here is the Python2 Countdown Clock: https://pythonclock.org. So it is time to brush up on basic Python3 skills. That was the motive for the meetup.

The meetup started with introductions. Chandan Kumar introduced me to the crowd, and everyone gave their introduction as well.

We started with the basic “Hello World” program and covered everything up to “Modules” of Python3🙂.

Lots of questions came up when we started discussing data structures, slicing, file handling and modules.
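For a flavour of the material we covered, here is a small sketch (not the exact workshop code) of slicing and file handling in Python3:

languages = ["C", "Go", "Python", "Rust", "Haskell"]
print(languages[1:3])    # slicing: ['Go', 'Python']
print(languages[::-1])   # a reversed copy

# File handling: write a file, then read it back.
with open("hello.txt", "w") as fobj:
    fobj.write("Hello World\n")

with open("hello.txt") as fobj:
    print(fobj.read())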

At the end of the workshop, people were given a problem to solve, and the response was pretty good🙂. Later we announced the next meetup of PyLadies Pune, and the meetup ended with a discussion about contributing to Open Source projects. Chandan helped me keep the session pretty interactive, so thanks to him🙂.

Here is the slide for Back to Basics[Python3]: slides.com/trishnag/back-to-basics-python3-101-workshop

[Photo: me giving the talk]


by Trishna Guha at August 29, 2016 05:43 PM

August 24, 2016

Sayan Chowdhury

Autocloud: What's new?

Autocloud was released during the Fedora 23 cycle as a part of the Two Week Atomic Process.

Previously, it used to listen to fedmsg for successful Koji builds. Whenever there was a new message, the AutocloudConsumer queued it for processing. The Autocloud job service then listened to the queue, downloaded the images, and ran the tests using Tunir. A more detailed post about its release can be read here.

During the Fedora 24 cycle, things changed: how the Fedora composes are built was reworked. Thanks to adamw for writing a detailed blogpost on what, why and how things changed.

With this change, Autocloud now listens for the compose builds over fedmsg, the topic being “org.fedoraproject.prod.pungi.compose.status.change”. It checks for the messages with the status FINISHED or FINISHED_INCOMPLETE.
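In fedmsg terms, that filtering looks roughly like this; a sketch, not the actual AutocloudConsumer code, and it assumes the message body carries the status and compose_id fields:

import fedmsg

# Listen on the fedmsg bus and pick out finished Pungi composes.
for name, endpoint, topic, msg in fedmsg.tail_messages():
    if topic != "org.fedoraproject.prod.pungi.compose.status.change":
        continue
    body = msg["msg"]
    if body["status"] in ("FINISHED", "FINISHED_INCOMPLETE"):
        print("Compose ready for testing:", body.get("compose_id"))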

After this filtering, it gets the Cloud images built during that particular compose using a tool called fedfind. The job here is to parse the metadata of the compose and extract the Cloud images. These images are then queued for both the libvirt and vbox boxes. The Autocloud job service then downloads the images and runs the tests using Tunir.

Changes in the Autocloud fedmsg messages

Earlier, only the messages for the status of each image test (the autocloud.image.* topics) were sent.

Now, along with the fedmsg messages for the status of the image tests, Autocloud also sends messages for the status of a particular compose.

The compose_id field was also added to the autocloud.image.* messages.

Changes in the UI

  • A page was added to list all the composes. It gives an overview of each compose: whether it is still running, the number of tests passed, etc.
  • The jobs page lists all the test data as earlier. We added filtering to the page, so you can filter the jobs based on various params.
  • You will agree that the jobs output page looks better than before. Rather than showing a big dump of text, the output is now properly formatted, and you can reference each line separately.

Right now, we are planning to work on testing the images uploaded via fedimg in Autocloud. Does the project look interesting, and are you planning to contribute? Ping us on #fedora-apps on Freenode.

August 24, 2016 11:58 AM

August 22, 2016

Kushal Das

Setting up a home music system with Raspberry Pi3 and MPD

I had a Raspberry Pi3 in my home office (actually it was missing for a few months). I found it two nights back, and decided to put it to use. The best part is the on-board WiFi; this means I can put it anywhere in the house and still access it. I generally use Raspbian on my Pi(s), so I did the same this time too. After booting from a fresh SD card, I did the following to install Music Player Daemon.

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install mpd
$ sudo systemctl enable mpd
$ sudo systemctl start mpd

This got MPD running on the system; the default location of the songs directory is /var/lib/mpd/music. You can set the location in the /etc/mpd.conf file. But this time, whenever I changed a song, the service stopped and I had to restart it. After poking around for some time, I found that I had to uncomment the following in the mpd.conf file.

device "hw:0,0"

I also changed the value of mixer_type to software, which enables volume control from the client software. After a restart, everything worked as planned. I have MPD clients on my phone (and even on Anwesha’s phone, and on my mother-in-law’s tablet), and also on my laptop.
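For reference, the relevant audio_output section of /etc/mpd.conf ends up looking roughly like this (a sketch; the name is arbitrary and the device value is hardware specific):

audio_output {
    type        "alsa"
    name        "Raspberry Pi ALSA output"
    device      "hw:0,0"
    mixer_type  "software"
}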

On my Fedora laptop, I installed a new client called Cantata.

$ sudo dnf install cantata

If you have any tips on MPD, or on similar setups, feel free to comment, or drop a note on Twitter/mail. Happy music, everyone.

by Kushal Das at August 22, 2016 03:12 PM

August 20, 2016

Sayan Chowdhury

Fedora Meetup Pune August 2016

Fedora Pune Meetup for the month of August 2016 happened today at our usual location. We had in total 12 people turning up for the meetup.

The event started with introductions, and we had two newcomers joining us this time, Trishna and Prathamesh.

This time the event was mostly focused around re-writing the GNU C Library Manual using reStructuredText and Sphinx. This task was decided during the release event that we had last month. We created an Etherpad link to maintain the status of the task.

The aim is to build a modern, good-looking version of the GNU C Library Manual.

In today’s meetup, we sat down and tried completing the chapters we picked. A couple of us sent PRs to the docs repo that we are maintaining on GitHub. The generated Read the Docs site can be seen here.
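If you want to pick a chapter and try it yourself, the usual Sphinx workflow applies. A rough sketch (the repo URL and directory here are placeholders):

$ sudo dnf install python-sphinx
$ git clone <docs-repo-url>
$ cd <docs-repo>
$ make html    # the generated HTML lands in _build/html/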

If you are planning to contribute, ping /me (sayan) or kushal in #dgplug channel on Freenode.

August 20, 2016 04:22 PM

Farhaan Bukhsh

GSoC: Final Submission

This summer has been really amazing; I learnt a lot and worked crazy hours. It has been a crazy yet amazing ride. I am not going to stop working on open source projects, and Pagure is something really close to my heart.

There are a few things left, but I can conclude that I was able to achieve what I wanted to at the beginning of this program. Still, there is never a feeling of satisfaction; you always want to achieve the best possible and most beautiful solution.

Pagure has CI integration, which was one of my major goals to achieve, and with the coming release it will be out and usable by people. It gives me immense pleasure to say that the foundation of CI was laid by me, although Pingou wrote a lot after that; that helped me learn the depth of thinking one needs when working on a feature like this.

I also worked on the Private Repo feature, which took more time than expected and was pretty challenging to achieve. This feature is in a feature branch, and may get merged after it is checked in staging first.

It was so challenging that I got stuck on a data-fetching problem from the database; we use SQLAlchemy as the ORM in Pagure. I went through a lot of ups and downs, and at times I was about to give up, but then I would crack some small part of it. Pingou has been such an amazing mentor: he never spoon-fed me, but instead asked the right questions, and the moment he asked, the idea bulb would glow.

I still remember struggling with Xapian and Whoosh. This was, and still is, a very big task; it requires a lot of time to optimize it to a level where it doesn’t slow the site down. I gave a lot of time to it, but since I had a few other goals and various issues to solve, I eventually moved on to those, intending to come back.

Pagure Pages is one of the last goals that I worked on recently, and there are discussions pending over it.

At a glance, I was able to achieve a lot of the big goals in my proposal; some work still has to be done, and I will continue working on various other goals. A few links that I want to share:

Commits made to the master branch

Commits on private-repo branch on pagure 

Pull-request for static page hosting

It kind of makes me feel happy that I have around 102 commits on the master branch now, and I believe I will be working a lot more on Pagure to bring a lot of cool and useful features to it. In case you have any suggestions, feel free to file issues on Pagure.

To be really frank, I am not at all sad that GSoC is ending, because I have received so much love and inspiration from the Fedora community that contributing to projects has actually become my daily routine; the day I don’t commit code, review patches or comment on issues, I start feeling something is missing.

And as some of my fellow GSoCers said: That’s all folks! ;)

Happy Hacking!

 


by fardroid23 at August 20, 2016 03:35 PM

August 19, 2016

Trishna Guha

IRC Client: Irssi On Atomic Host

If you are a terminal geek you will always want to do things using the terminal😉. And when it comes to an Atomic host, YES, you will have to do stuff using the terminal.

If you don’t know about Atomic, you must visit http://www.projectatomic.io 🙂

This post describes how to set up and use an IRC client on an Atomic host. This is applicable to any cloud host as well.

Irssi is a terminal-based IRC client for Unix/Linux systems. And the best part is that we will not need to set things up manually, because we have containers🙂.

Let’s Get Started:

I am using a Fedora Atomic host here. Get the Fedora Atomic host from here: https://getfedora.org/en/cloud/download/atomic.html

Make Sure you have Docker installed.

Copy the Dockerfile from here: https://github.com/trishnaguha/Fedora-Dockerfiles/blob/irssi/irssi/Dockerfile
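For reference, a minimal Irssi Dockerfile along these lines might look like the following (a sketch; the actual file at the link may differ):

FROM fedora

RUN dnf install -y irssi && dnf clean all

ENTRYPOINT ["irssi"]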

Now run docker build -t username/irssi . This will build the image.

Thereafter you just need to run the container🙂: docker run -it username/irssi.

Later on, you will be able to do the whole setup with just docker run -it fedora/irssi, once Fedora adds Irssi to its Docker Hub :).

After you start the container, you will see something like this:

[Screenshot: Irssi starting up inside the container]

Let’s join a channel:

[Screenshot: joining a channel in Irssi]

You will find the Irssi Commands here: Irssi Commands.


by Trishna Guha at August 19, 2016 09:27 AM

August 18, 2016

Shakthi Kannan

GNU Emacs - Search, Frames and Windows

In this next article in the GNU Emacs series, we shall learn how to perform text search in a buffer, and introduce the concept of windows and frames.

Search

You can copy the following poem, which I wrote in 2013, in the scratch buffer or a file inside GNU Emacs to try out the search commands:

Emacs is, an operating system 
Which unlike many others, is truly, a gem 
Its goodies can be installed, using RPM 
Or you can use ELPA, which has already packaged them 

You can customize it, to your needs 
You can also check EmacsWiki, for more leads 
Your changes work, as long as reload succeeds 
And helps you with, your daily deeds 

People say, it lacks a decent editor 
But after using its features, they might want to differ 
Using Magit’s shortcuts, you might infer 
That it is something, you definitely prefer 

Plan your life, with org-mode 
You don’t necessarily need, to write code 
TODO lists and agenda views, can easily be showed 
Reading the documentation, can help you come aboard 

Emacs is, a double-edged sword 
Its powerful features, can never be ignored 
Customization is possible, because of Free Software code 
And this is, my simple ode.

You can search for a word in a buffer using C-s shortcut. You will then be prompted with I-Search: in the minibuffer where you can type any text, and GNU Emacs will try to find words matching it, in the buffer. This is an incremental forward search and is case insensitive. Thus, if you search for the word ‘todo’ in the poem, it will match the string ‘TODO’. You can exit from the incremental search using the Enter key, or abort the search using the C-g key combination. If you want to do an incremental search in the reverse direction - from the cursor position to the top of the buffer – you can use the C-r shortcut.

If you place the cursor on the letter ‘E’ in ‘Emacs’ in the poem’s first line, and press C-s C-w, Emacs will try to find all occurrences of the word ‘Emacs’ in the text. Suppose, you have cut or copied text to the kill ring, you can search for this text by using C-s C-y shortcut. You can repeat the previous forward search using C-s C-s, and the previous backward search using C-r C-r shortcuts.

The first occurrence of a text can be looked up in the forward direction using C-s. This will prompt you in the minibuffer with a Search: string where you can type the text that you want to search for, and then press the Enter key. It will then search for the text and move the cursor to the matching word. This is a non-incremental forward search. Similarly, you can perform a non-incremental backward search using C-r. You can then input the search string to be searched for, followed by the Enter key.

Regular expression searches are very useful too. In order to search forward for an expression, you can use C-M-s followed by the Enter key, which will prompt you in the minibuffer with the string ‘Regexp search:’. You can then enter a regular expression. This will only match the first occurrence of the text and the search will then terminate. You can perform a one-time backward regular expression search using C-M-r shortcut. To perform an incremental forward search, you need to use C-M-s and you will be prompted with the string ‘Regexp I-search:’, where you can provide the pattern to match. For example, ‘[a-z]+-[a-z]+’ will match both the expressions ‘org-mode’ and ‘double-edged’ words in the poem. You can use C-M-r for an incremental backward regex search.

A common use case is to find and replace text in a buffer. The sequence to be used is M-x query-replace followed by the Enter key. You will then be prompted with the string ‘Query replace:’ where you will be asked which word or phrase is to be replaced. For example, if you mention ‘ode’, it will again prompt you with ‘Query replace ode with:’ and then you can enter the replacement string. You can also search and replace text by matching a regular expression with the C-M-% shortcut key combination.

Frames

The outermost user interface boundary of GNU Emacs is called a frame. In fact, when you split the GNU Emacs user interface, you are actually creating windows. So, in GNU Emacs, you have windows inside a frame. This is in contrast to today’s user applications, where the entire application is contained in a ‘window’. This is an important terminology to remember when using GNU Emacs.

You can create a new frame using C-x 5 2 key combination. You can move the cursor to the next frame using C-x 5 o (letter ‘o’), and delete the current frame using C-x 5 0 (zero) shortcut. This will not delete the existing buffers, but, only the view. In order to open a file in a new frame, you can use C-x 5 f. You can also open a file in a new frame in read-only mode using C-x 5 r. To switch to the buffer in a new frame, use C-x 5 b key combination.

Windows

You can split a frame vertically to create two windows using C-x 2 (Figure 1).

Split frame vertically

To split horizontally, you can use C-x 3 (Figure 2).

Split frame horizontally

To move the cursor to the next window, use C-x o (the letter ‘o’). You can delete the current window using C-x 0 (zero). Note that this does not delete the buffer, but, just the view. If you have multiple windows and you want to retain the current window and remove the rest of the windows from the display, you can use C-x 1.

You can open a file in a new window using C-x 4 f. You can also select an existing buffer in another window using C-x 4 b. If you have multiple windows that you would like to be balanced equally, you can use C-x +. Figure 3 shows an Emacs screenshot with three windows that are balanced.

Balanced windows

You can scroll the contents in the other window using C-M-v. You can scroll backwards using C-M-Shift-v.

You can also use the following shortcuts in your ~/.emacs to simplify the shortcuts used to split and remove windows.

(global-set-key (kbd "C-1") 'delete-other-windows)
(global-set-key (kbd "C-2") 'split-window-below)
(global-set-key (kbd "C-3") 'split-window-right)
(global-set-key (kbd "C-0") 'delete-window)

If you would like to make a window wider, you can use C-x } shortcut and to reduce it horizontally, you will need to use C-x {. You can use a prefix count to perform the operation ‘n’ times. For example, C-u 5 C-x { to shrink a window horizontally. To make a window taller, you can use C-x ^ shortcut; and to make it smaller, you have to use a negative prefix. For example, C-u -1 C-x ^. A screenshot of a custom Emacs frame with three windows is shown in Figure 4.

Custom windows in a frame
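Finally, if you resize windows often, you can bind the underlying commands to simpler keys in your ~/.emacs, in the same spirit as the earlier snippet (a suggested sketch using the standard commands):

(global-set-key (kbd "S-C-<left>") 'shrink-window-horizontally)
(global-set-key (kbd "S-C-<right>") 'enlarge-window-horizontally)
(global-set-key (kbd "S-C-<down>") 'shrink-window)
(global-set-key (kbd "S-C-<up>") 'enlarge-window)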

August 18, 2016 11:00 AM

August 15, 2016

Kushal Das

Event report: Flock 2016

This year’s Flock was held in Krakow, Poland, from 2nd to 5th August. /me and Sayan started our journey on the 30th from Pune, and we finally reached Krakow on the afternoon of the 31st. Ratnadeep joined us from Frankfurt. Patrick was my roommate this time; he reached a few hours after we did.

Day -1

Woke up early, and started welcoming people near the hotel entrance. Slowly the whole hotel was filling up with Flock attendees. Around noon, a few of us decided to visit Oskar Schindler’s Enamel Factory. This is a place I had on my visit list for a long time (most of those are historical places); I finally managed to check it off. Then we walked back to the city center, and finally back to the hotel.

Started meeting a lot more people in the hotel lobby. The usual staying up till late night continued in this conference too. The only issue was getting up early; somehow I could not wake up early and write down the daily reports as I did last year.

Day 0

Managed to reach breakfast with enough time to eat before the keynote started. Not being able to find good espresso was an issue, but Amanda later pointed me to the right place. I don’t know how she manages to do this magic every time; she can really remove/fix any blocker for any project :)

Received the conference badge, and other event swag, from the registration desk. This one is, till date, the most beautiful badge I have seen. Matthew gave his keynote on “The state of Fedora”. Among many other important stats he shared, one point was really noticeable for me: for every single Red Hat employee who contributes to Fedora, there are at least two contributors from the community. This is a nice sign of a healthy community.

After the keynote I started attending the hallway tracks as usual. I went to this conference with a long list of topics I needed to discuss with various people, and I managed to have all of those conversations over the 4 days of the event. Got tons of input about my work, and about the project ideas. Now is the time to turn those suggestions into solid contributions.

Later I went to the “The state of Fedora-infra” talk. This was important to me personally, as it gives an easy way to revisit all the infrastructure work going on. Later in the day I attended the Fedora Magazine and university outreach talks.

In the evening there was a “Tour of Krakow”, but the Fedora Engineering team had a team dinner, as this is the only time when all of us meet physically. The food was once again superb.

Day 1

As I mentioned before, it was really difficult to wake up, but somehow I managed to do that, and reached downstairs before the keynote started. Scratch was mentioned in the keynote as a tool they use. Next came the usual hallway talks; in the second half I attended the diversity panel talk, and then moved to the Pagure talk. I knew that there was a huge list of cool new features in Pagure, but learning about them directly from the upstream author is always a different thing. Pingou’s slides also contained many friends’ names, which is always a big happy thing :)

My talk on testing containers using Tunir was one of the last talks of the day. You can go through the whole presentation, and if you want to see any of the demos, click on those slides. That will open a new tab with a shell console; type as you normally type in any shell (you can type any char), and press Enter as required. I use Tunir to test my personal containers which I run in production, and in this talk I tried to show various such use cases.

At night we went out for river cruising. Before coming back, a few of us visited the famous Wawel Dragon. I also met Scott Collier for the first time. It is always nice to meet people with whom you work regularly over the internet.

Day 2

It started with lightning talks. I spoke for a few minutes about the dgplug summer training. You can find the list of talks here. After this, in the same room, we had the “Meet the FAmSCo” session. At this Flock I managed to meet Gerold, Kanarip, and Fabian after 9 years. Christoph Wickert took notes, and discussed the major action items in last week’s FAmSCo IRC meeting too. Next I attended the “Infrastructure workshop”, and after that, as usual, hallway tracks for me. I was looking forward to having a chat with Dodji Seketeli about his and Sinny’s work related to ABI stability. Later at night a few of us managed to stay up till almost 5 AM, working :)

Day 3

The last day of Flock 2016. Even after the early-morning bedtime, I somehow managed to pull myself out of bed, and came down to the lobby. The rest of the day I spent just talking to people: various project ideas, demos of ongoing work, working on future goals.

Conclusion

Personally, I had a long list of items which I wanted to talk about with various people. I think I managed to cross off all of those, and got enough feedback to work on. In the coming days I will blog about those action items. Meanwhile, you can view the photos from the event.

by Kushal Das at August 15, 2016 11:54 AM

Anwesha Das

A walk of licenses in PyPI

While doing the license research for the Fedora Project, I started searching for other projects where I could do similar work. I code in Python, so I was curious about which licenses are commonly used for Python projects. I did my research on Python packages and their licensing in Fedora land (I will do a blog post on that in future). While writing code I came across the PyPI project, so I planned my next license analysis project around that.

I searched Google for more information about the PyPI project. For a definition, what I found is the following:

PyPI is the canonical place where Python modules and packages are stored. If we dissect the term: Py for Python, P for Package, I for Index. PyPI works as a repository of software for the Python language. PyPI is otherwise known as the "Cheese Shop".

Importance of PyPI

It is the official software repository for Python packages, where developers can upload as well as download open source software written in Python. PyPI is blessed by the Python Software Foundation, and Donald Stufft's enormous work keeps the project running.

Presently PyPI has 86,351 Python packages. To see where the Python world is heading (license-wise), it is the apt place to search.

My work

To give a shape to my project, I chose the first 2,500 packages from the PyPI ranking website. Then I used the JSON API of PyPI to get the license information of each package; I wrote a simple Python script for that.
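A minimal sketch of this kind of lookup, assuming the requests library and the pypi.python.org JSON endpoint, would look something like this:

import requests

def get_license(package):
    """Ask PyPI's JSON API for the declared license of a package."""
    url = "https://pypi.python.org/pypi/{0}/json".format(package)
    response = requests.get(url)
    if response.status_code != 200:
        return "Unknown"
    return response.json()["info"].get("license") or "no name"

for package in ["requests", "Django", "numpy"]:
    print(package, "->", get_license(package))

What I found was quite interesting: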

The following numbers of packages had their license listed as:

  • no name = 128 packages
  • Unknown = 287 packages.

For these, I tried manually to find the licenses, but still could not find the licenses of 49 packages. I am still working on it; so, for now, my study will consider these 49 packages as unknown.

As evident from the chart, BSD is the most used license, used in 655 packages, and MIT is the second, used in 567 packages.

11 packages mentioned their license as "Public Domain", and 7 packages as the Unlicense. Very few packages have chosen the Python Software Foundation License. 7 packages mentioned their license as "Open Source Initiative approved", but did not name exactly which license they are talking about. 5 packages mentioned their license as Creative Commons. Expat is there for 7 packages.

Most of the developers (that I have met till now) do not really like large legal texts. Yet, for some strange reason, a few developers here had chosen to paste the whole license document text instead of just writing the license name, like MIT or BSD.

Next Steps

Presently I am filing bugs and submitting patches against these various packages to fix their licensing issues. I will be discussing various software licenses in my coming blog posts.

by Anwesha Das at August 15, 2016 06:53 AM

August 11, 2016

Anwesha Das

PyLadies Pune Chapter flocked again in August

At our July meetup itself, we had fixed the date for our August meetup, i.e. 10.08.2016.

I had to do a little multitasking to reach the event. My dear husband (when your husband is not around for a few weeks he seems 'dear' :)) was returning from Flock. So we slept at 6 in the morning and woke up at 9 AM, because we needed to reach the event by 10:30 AM or so.

Again, thanks to Red Hat and Rupali for letting our event happen in the Red Hat office. After reaching the venue, I saw that the whole setup for the meetup was done in the cafeteria. So the first two sessions happened there, and for the workshop we moved to our favorite "Harishchandraghar" (and that too without trekking hard :)).

Our first speaker was Nisha Menon Poyarekar. Her talk, for our FOSS session, was on Communication: the open source etiquette. In the open source world, we build our image and profile over the internet; people know you by how you talk over the internet. She shared some guidelines about how to ask smart questions: before you ask a question, please Google it; in most cases you will get some answer, and only if you don't should you ask people. She also shared advice on communication; very importantly, that we must grow a little bit of thick skin to work hassle-free. Generally the criticism is of your contribution, not of the contributor. The part I loved the most is that she said to 'Give Back' to the community. The COMMUNITY is bigger than the code, so you should give priority to the community: it should grow and stick together. I personally just loved her talk. Hopefully, after attending this session, we PyLadies will not make communication mistakes again. Thanks Nisha :)

Then it was the talk-on-talk session. This time we were listening to the PyCon 2015 keynote by Jacob Kaplan-Moss. How nice to hear that he started his talk by referring to a PyLadies auction :). He said that to be a successful programmer one does not have to be a superbly intelligent, great programmer; a mediocre programmer with a certain practiced set of skills will be truly successful. He stressed the situation of women working in the industry, and showed their journey through graphs. I just loved the part where he mentioned that Lynn Root said it is not the great programmers in PyLadies that make her happy, but that whole bunch of average women programmers in PyLadies. On a personal note, this talk has been an encouragement to me; it gave me the confidence that even I can do something good for this community.

The lunchtime discussion was very useful. We discussed the next meetup: dates, topics, etc. Also what everyone wants from the group, and its future.

Next was the 'Hands on Python' session by Chandan Kumar. He covered strings, lists, and a little bit of file operations and functions. Typing still remains the biggest problem for me :(. It was quite an interactive session; everyone was participating. At the end of the session, Chandan gave us a problem: to make a new directory consisting of all the audio files on your system, through a program. So, everyone who missed the session can try it at home (a possible solution is sketched below). Thank you Chandan :)
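Here is one minimal way to attempt it, using only the standard library (a sketch; the audio extensions and paths are assumptions you should adapt):

import os
import shutil

AUDIO_EXTENSIONS = (".mp3", ".ogg", ".flac", ".wav")

def collect_audio_files(source_dir, dest_dir):
    """Copy every audio file found under source_dir into dest_dir."""
    if not os.path.exists(dest_dir):
        os.makedirs(dest_dir)
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            if name.lower().endswith(AUDIO_EXTENSIONS):
                shutil.copy(os.path.join(root, name), dest_dir)

collect_audio_files(os.path.expanduser("~/Music"), os.path.expanduser("~/all-audio"))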

Thanks to Ganesh Kadam for helping us with the setup, and to Praveen Kumar for helping during the hands-on session.

See you Ladies at the next meet up on 10th September at Red Hat office.

by Anwesha Das at August 11, 2016 11:48 AM

August 08, 2016

Farhaan Bukhsh

Docs in Pagure

I took this week to hack on a feature called Docs, which gives you the ability to host the documentation of your project on Pagure. I had never explored this feature before, so I started to hack on it.

This feature is pretty straightforward to use. Once you have your project up and running, go to the Settings of the project, and under Project Options click on Activate Documentation; this will activate a Doc tab in the main project, which can be used to host your docs. Now, this is a little tricky, because you need to clone and push to a different URL; the docs are maintained in a separate location due to security concerns. When you activate the option, you are provided with a Docs-specific URL. You need to push your documents or static pages to that URL, and any page named index will automatically be taken as the first page.


You have to click on the more button beside GIT URLs to get your Docs URL, and then you are good to go to host your static page.
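The flow is roughly like this (a sketch; the exact Docs URL shown by the more button for your project will differ):

$ git clone ssh://git@pagure.io/docs/<project_name>.git
$ cd <project_name>
$ echo "Welcome to my project" > index.md    # index becomes the first page
$ git add index.md
$ git commit -m "Add docs landing page"
$ git push origin master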

For people who want to hack on Docs in Pagure, you need to pull a few tricks to do that.

First and foremost, you need to get the code from pagure.io, and then, after setting up Pagure for development, you need to run two servers:

  1. Pagure server
  2. Doc server

The scripts corresponding to them are runserver and rundocserver.

If you have ever hacked on Pagure, you will know that you have to log in, make a repo, and follow the same steps mentioned above to see the Doc tab.

Now comes the tricky part: for the Doc tab to show anything, there should be a <project_name>.git created in the docs repo, which is not there by default; you just need to copy the file from the repo directory to docs. Once this is done, clone the project repo from docs, delete all the files there, and put in the files you want on the static page; we support a lot of formats, like md, rst, etc. Add, commit and push, and voila, you will see them in your local instance.

I am actually working on issue 469, in which Ryan has suggested making Docs more specific to static-page hosting, given the architecture that Docs is based on. This is actually a straightforward task, but a really beautiful one, which needs a bit of deliberation on the things we want to achieve. Hope this gave you an insight into what I am trying to do.

More documentation on this can be found in the usage section of Pagure Docs.

Happy Hacking! :)


by fardroid23 at August 08, 2016 05:20 AM

July 30, 2016

Suraj Deshmukh

Kubernetes: HorizontalPodAutoScaler and Job

To try out the following demos, set up your environment as mentioned here.

Git clone the demos repo:

git clone https://github.com/surajssd/k8s_demos/
cd k8s_demos/

Horizontal Pod Autoscaler

Once you have all the setup done, follow the video for the demo instructions.
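As a quick taste of what the demo covers, an autoscaler can be created from the command line; a generic example, not tied to the demo's exact resources:

$ kubectl run nginx --image=nginx --requests=cpu=200m
$ kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=80
$ kubectl get hpa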

Job

Once you have all the setup done, follow the video for the demo instructions.
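For reference, a minimal Job manifest looks roughly like this (the classic pi example from the Kubernetes docs, not the demo's exact manifest):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never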


by surajssd009005 at July 30, 2016 11:53 AM

July 26, 2016

Shakthi Kannan

Deep Learning Conference 2016, Bengaluru

I attended Deep Learning Conference 2016 at CMR Institute of Technology, Bengaluru on July 1, 2016.

Deep Learning Conference 2016 poster

Anand Chandrasekaran, CTO, Mad Street Den began the day’s proceedings with his talk on “Deep learning: A convoluted overview with recurrent themes and beliefs”. He gave an overview and history of deep learning. He also discussed LeNet, the Deep Belief Network by Geoffrey Hinton, the backpropagation algorithm (1974) by Paul Werbos, and deep convolutional neural networks (2012) with AlexNet, named after Alex Krizhevsky. Mad Street Den primarily works on computer vision problems. In one of their implementations, they extract 17,000 features from a dress, and provide recommendations to customers. They are one of the early users of NVIDIA GPUs. He also briefed us on other deep learning tools like Amazon ML, Torch 7, and Google TensorFlow.

The second talk of the day was a sponsored talk on “Recent advancements in Deep Learning techniques using GPUs” by Sundara R Nagalingam from NVIDIA. He talked about the GPU hardware and platforms for deep learning available from NVIDIA. It was a complete sales pitch. I did ask them whether they have free and open source Linux device drivers for their hardware, but at the moment they are all proprietary (binary blobs).

After a short tea break, Abhishek Thakur presented “Applied Deep Learning”. This was one of the two best presentations of the day. Abhishek illustrated binary classification and fine-tuning. He also briefed us on GoogleNet, DeepNet, and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Deep learning software such as Theano, Lasagne, and Keras was also discussed. A query can be of three types: navigational, transactional, or informational. Word2vec is a two-layer neural net that can convert text into vectors. You can find a large collection of images for input datasets at CIFAR.

The next two sessions were 20 minutes each. The first talk was on “Residual Learning and Stochastic Depth in Deep Neural Networks” by Pradyumna Reddy, and the second was on “Expresso - A user-friendly tool for Deep Learning” by Jaley Dholakiya. The Expresso UI needs much work, though. I headed early for lunch.

Food was arranged by Meal Diaries and it was delicious!

The post-lunch session began at 1410 IST with Arjun Jain talking on “Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation”. He gave a number of examples of how difficult it is to train models, especially for the human body.

Vijay Gabale then spoke on “Deep Dive into building Chat-bots using Deep Learning”. This was the second-best presentation of the day. He gave a good overview of chat-bots and the challenges involved in implementing them. There are four building blocks for chat-bots: extract intent, show relevant results, contextual interaction, and personalization. He also discussed character-aware neural language models.

I then headed to the BoF session on “Getting Started with Deep Learning”. A panel of experts answered questions asked by the participants. It was suggested to start with toy data and move to big data. Andrew Ng’s Machine Learning course and a Reinforcement Learning course were recommended. CS231n: Convolutional Neural Networks for Visual Recognition was also recommended for computer vision problems. Keras and Theano are useful tools to begin with. It is important to not just do a proof-of-concept, but also see how things work in production. It is good to start by using and learning the tools, and subsequently delve into the math. Having references can help you go back and check them when you have the know-how. Data Nuggets and Kaggle are two good sources for datasets. The Kaggle Facial Keypoints Detection (KFKD) tutorial was also recommended. Data science does involve both programming and math. We then headed for a short tea break.

Nishant Sinha, from MagicX, then presented his talk on “Slot-filling in Conversations with Deep Learning”. He gave an example of a semantic parser filling slots, using a simple mobile-recharge example. He also discussed CNNs, Elman RNNs and Jordan RNNs. This was followed by the talk “Challenges and Implications of Deep Learning in Healthcare” by Suthirth Vaidya from Predible Health. He spoke on the difficulties in dealing with medical data, especially biometric images. Their solution won the Multiple Sclerosis Segmentation Challenge in 2015.

The last talk of the day was “Making Deep Neural Networks smaller and faster” by Suraj Srinivas from IISc, Bengaluru. He discussed how large models can be mapped to small models using model compression. This involves compressing matrices through four techniques: sparsify, shrink, break, and quantize. The objective is to scale the solution down to run on mobile and embedded platforms, and on CPUs. It was an interesting talk, and a number of open research problems exist in this domain.

Overall, it was a very useful one day conference.

July 26, 2016 11:15 AM

July 20, 2016

Suraj Deshmukh

Run aircrack-ng without external “wifi card” [UPDATED]

Note: This is an updated version of my previous blog post, which goes by a similar title.

I wanted to use the pentesting tools provided in Kali Linux. I use a Fedora machine as my primary desktop; I could install some of those tools locally, but I wanted to keep these things separate, so I use Kali Linux in a VM. It was all good, until the point when I was not able to run wireless pentesting tools from the VM.


This is because the VM does not get direct access to the host’s wifi card. The way it works, VMs get connected to a bridge set up by your hypervisor via an ethernet interface. So the VM never deals with how the host is connected to the outside world, be it a wired or a wireless connection.

The VM can get a wireless interface using a USB-connected wifi device, but then you need to have one to utilize it. To get around this problem and use your host machine’s interface, we can use containers. Containers give you isolation similar to a VM (not exactly), and since a container is just a process mapped onto your operating system, it has access to everything on your machine (if run in privileged mode); the container can also see the host’s network stack if run with a specific flag (--net="host").

So let's get started.

Install Docker Engine for your system.

Create Dockerfile which looks like this:

$ cat Dockerfile

FROM kalilinux/kali-linux-docker

RUN apt-get -y update && \
    apt-get -y upgrade && \
    apt-get install -y aircrack-ng pciutils

Here we are using the official Kali Linux docker image, and then installing the required tools.

Create a docker image using the above Dockerfile:

$ docker build -t mykali .

Now that you have all the bits required to get started, spin up the container:

$ docker run -it --net="host" --privileged --name aircrack mykali bash
root@user:/#

Once inside the container, identify your wireless interface:

# ip a
[SNIP]
3: wlp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 40:f0:2f:57:3d:37 brd ff:ff:ff:ff:ff:ff
inet 10.9.68.109/23 brd 10.9.69.255 scope global dynamic wlp9s0
valid_lft 1373sec preferred_lft 1373sec
inet6 fe80::bf7e:dc5d:337:131c/64 scope link
valid_lft forever preferred_lft forever
[SNIP]


On my machine it is wlp9s0.
Enable monitor mode on that wireless interface.

# airmon-ng start wlp9s0
Your kernel supports rfkill but you don't have rfkill installed.
To ensure devices are unblocked you must install rfkill.
PHY Interface Driver Chipset

phy0 wlp9s0 ?????? Qualcomm Atheros AR9485 Wireless Network Adapter (rev 01)

(mac80211 monitor mode vif enabled for [phy0]wlp9s0 on [phy0]wlp9s0mon)
(mac80211 station mode vif disabled for [phy0]wlp9s0)

Observe the newly created interface, wlp9s0mon:

# ip a
[SNIP]
9: wlp9s0mon: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UNKNOWN group default qlen 1000
link/ieee802.11/radiotap 40:f0:2f:57:3d:37 brd ff:ff:ff:ff:ff:ff

Start capturing raw 802.11 frames on the newly created interface running in monitor mode:

# airodump-ng wlp9s0mon

Let this process continue to run here.

Start another terminal window; we need another bash instance in the container:

$ docker exec -it aircrack bash
root@dhcp35-70:/#

Now that you have everything set up, start doing stuff here, in this terminal window. If you want more software in the container, edit the Dockerfile above and rebuild the image accordingly.

To stop the monitoring mode:

# airmon-ng stop wlp9s0mon
Your kernel supports rfkill but you don't have rfkill installed.
To ensure devices are unblocked you must install rfkill.

PHY Interface Driver Chipset

phy0 wlp9s0mon ?????? Qualcomm Atheros AR9485 Wireless Network Adapter (rev 01)

(mac80211 station mode vif enabled on [phy0]wlp9s0)

(mac80211 monitor mode vif disabled for [phy0]wlp9s0mon)

And, finally, since the wireless interface was put into monitor mode, we should stop monitoring before we exit the container. Doing this is important, because the host OS will not get access to the wireless card until the monitoring process started by the Docker container is stopped. Now the interface wlp9s0 has appeared back, because airmon-ng was stopped.

# ip a
[SNIP]
8: wlp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 40:f0:2f:57:3d:37 brd ff:ff:ff:ff:ff:ff
inet 10.9.68.109/23 brd 10.9.69.255 scope global dynamic wlp9s0
valid_lft 3581sec preferred_lft 3581sec
inet6 fe80::bf7e:dc5d:337:131c/64 scope link
valid_lft forever preferred_lft forever

Please comment if you have any doubts.



by surajssd009005 at July 20, 2016 06:02 PM

July 07, 2016

Praveen Kumar

Vagrant DNS with Landrush and Virtualbox and dnsmasq

Landrush is a pretty neat Vagrant plugin if you need a DNS server which is visible to both the host and the guests. On Mac OS it works out of the box, but to make it work on Linux we have to make some configuration changes to dnsmasq.

I assume that you are using the latest Vagrant and VirtualBox for this experiment. If you are using libvirt, then please refer to Josef's blogpost.

The Landrush DNS server runs on port 10053 (localhost) instead of 53, so we have to add an entry to redirect requests for our domain names to Landrush. Follow the steps below and let's configure it.

Install dnsmasq if not present:
$ sudo dnf install dnsmasq

Add the following to /etc/dnsmasq.conf:
listen-address=127.0.0.1

Create the file below, which redirects our .vm traffic to Landrush:
$ cat /etc/dnsmasq.d/vagrant-landrush
server=/.vm/127.0.0.1#10053

Start/restart the dnsmasq service and check its status (it should be active):
$ sudo systemctl start dnsmasq.service
$ sudo systemctl status dnsmasq.service
● dnsmasq.service - DNS caching server.
Loaded: loaded (/usr/lib/systemd/system/dnsmasq.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2015-12-24 11:57:47 IST; 2s ago
Main PID: 19969 (dnsmasq)
CGroup: /system.slice/dnsmasq.service
└─19969 /usr/sbin/dnsmasq -k
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com systemd[1]: Started DNS caching server..
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com systemd[1]: Starting DNS caching server....
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: started, version 2.75 cachesize 150
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth DNSSEC ...ct inotify
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 127.0.0.1#10053 for domain vm
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: reading /etc/resolv.conf
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 127.0.0.1#10053 for domain vm
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 10.75.5.25#53
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: using nameserver 10.68.5.26#53
Dec 24 11:57:47 dhcp193-61.pnq.redhat.com dnsmasq[19969]: read /etc/hosts - 2 addresses
Hint: Some lines were ellipsized, use -l to show in full.


Make sure you put '127.0.0.1' as the first nameserver in /etc/resolv.conf:
$ cat /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8
nameserver 4.4.4.4

Make the following changes to your Vagrantfile:
$ cat Vagrantfile
PUBLIC_ADDRESS = "10.1.2.2"
PUBLIC_HOST = "your_host.vm"

Vagrant.configure(2) do |config|
  config.vm.network "private_network", ip: "#{PUBLIC_ADDRESS}"
  config.vm.hostname = "#{PUBLIC_HOST}"
  config.landrush.enabled = true
  config.landrush.host_ip_address = "#{PUBLIC_ADDRESS}"
  config.landrush.tld = ".vm"
  config.landrush.guest_redirect_dns = false
end

$ vagrant landrush ls
your_host.vm 10.1.2.2
2.2.1.10.in-addr.arpa your_host.vm

$ sudo netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 10.65.193.61:44319 0.0.0.0:* LISTEN 14810/weechat-curse
tcp 0 0 127.0.0.1:17600 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 127.0.0.1:45186 0.0.0.0:* LISTEN 433/GoogleTalkPlugi
tcp 0 0 127.0.0.1:39715 0.0.0.0:* LISTEN 433/GoogleTalkPlugi
tcp 0 0 127.0.0.1:17603 0.0.0.0:* LISTEN 2946/dropbox
tcp 0 0 0.0.0.0:10053 0.0.0.0:* LISTEN 14966/ruby-mri
tcp 0 0 127.0.0.1:2222 0.0.0.0:* LISTEN 15200/VBoxHeadless
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 16871/dnsmasq
tcp 0 0 192.168.121.1:53 0.0.0.0:* LISTEN 16817/dnsmasq
tcp 0 0 192.168.124.1:53 0.0.0.0:* LISTEN 16810/dnsmasq
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 2647/cupsd
tcp6 0 0 ::1:631 :::* LISTEN 2647/cupsd

$ ping your_host.vm
PING your_host.vm (10.1.2.2) 56(84) bytes of data.
64 bytes from 10.1.2.2: icmp_seq=1 ttl=64 time=0.332 ms
64 bytes from 10.1.2.2: icmp_seq=2 ttl=64 time=0.238 ms


by Praveen Kumar (noreply@blogger.com) at July 07, 2016 12:10 PM