Planet dgplug

December 03, 2016

Kushal Das

Communication tools and submitting weekly reports

I work on the Fedora Engineering team. There are around 17 people in the team, and geographically we cover from Australia to the USA. Most of us are remote; only a handful of members go to the local offices. I wrote a blog post around 3 years back, just after I started working remotely. In this post, I am trying to write down my thoughts about our communication styles.

Communication Tools

IRC is our primary communication medium. I am in around 42 channels dedicated to various sub-projects inside Fedora. We have a few dedicated meeting channels. As all meetings involve community members, the meeting timings are based on the availability of many other people. This is the only difficult part, as I have many meetings that run past midnight. For any other discussion where we need more participation, or want to keep a record, we use our mailing lists.

A few of us also use various video chat systems regularly; it is always nice to see the faces. As a team, we mostly meet once a year during Flock, and some get a chance to meet each other during DevConf.

Weekly reports

All of our team members send weekly work status updates to the internal mailing list. Sometimes I lag behind in this particular task, and I have tried various approaches to fix that. I maintain a text file on my laptop where I write down all the tasks I do; a Python script converts it into a proper email (with dates etc.) and sends it to the mailing list. The problem with a plain text file is that if I miss one day, I generally miss the next day too, and the saga continues. Many of my teammates use taskwarrior as their TODO application. I used it for around 6 months, and as a TODO tool it is amazing. I have a script written by Ralph which creates a very detailed weekly report from it, and I was filling in my text file by copy/pasting from that report. Annotations were my main problem with taskwarrior: I much prefer updating a TODO note in a GUI (web/desktop) to any command line tool. In taskwarrior, I was adding and ending tasks nicely, but was not writing any updates to the tasks.

I used Wunderlist a few years back. It has a very nice web UI, and also a handy mobile application. The missing part is the power of creating reports. I found they have nice open source libraries to interact with the API services, both in Python and in golang. I have forked wunderpy2, which has a patch for getting the comments on any given task (I will submit a PR later). Yesterday I started writing a small tool using this library; it prints the report to STDOUT in Markdown format. There is no support for customizing the report yet; I will be adding that slowly. My idea is to run this tool, pipe the output to my report text file, edit it manually if I want to, and finally execute my Python script to send out the report. In the coming weeks I will be able to say how well this method works.
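As a rough illustration of the plain-text-file approach, here is a minimal sketch of a script that turns a task log into a Markdown report. The "YYYY-MM-DD: task" log format, the heading layout, and the sample entries are all assumptions for this sketch, not my actual script.

```python
from datetime import date, timedelta

def weekly_report(log_lines, today):
    """Collect "YYYY-MM-DD: task" lines from the last seven days
    into a Markdown report, grouped under one heading per day."""
    cutoff = today - timedelta(days=7)
    report = ["# Weekly report ({} to {})".format(cutoff, today), ""]
    current_day = None
    for line in log_lines:
        day_str, _, task = line.partition(":")
        day = date(*[int(part) for part in day_str.strip().split("-")])
        if day < cutoff:
            continue  # older than one week, leave it out
        if day != current_day:
            report.append("## {}".format(day))
            current_day = day
        report.append("* {}".format(task.strip()))
    return "\n".join(report)

# Hypothetical log entries for demonstration.
log = [
    "2016-11-28: Reviewed Autocloud pull requests",
    "2016-12-01: Atomic Working Group meeting",
]
print(weekly_report(log, today=date(2016, 12, 3)))
```

The output of a script like this can then be piped into the report text file and edited before the email goes out.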

by Kushal Das at December 03, 2016 04:47 PM

December 02, 2016

Kushal Das

Atomic Working Group update from this week's meeting

Two days back we had a very productive meeting of the Fedora Atomic Working Group. This post is a summary of that meeting. You can find all the open issues of the working group in this Pagure repo. There were 14 people present at the meeting, which happens every Wednesday at 5PM UTC in the #fedora-meeting-1 channel on the Freenode IRC server.

Fedora 26 change proposal ideas discussion

This topic was the first point of discussion in the meeting. Walters informed us that he will continue working on the OpenShift-related items, mostly the installer, system containers etc., and also on rpm-ostree. I have started a thread on the mailing list about the change ideas, and we also decided to create a wiki page to capture all of them.

During the Fedora 25 release cycle, we marked a couple of Autocloud tests as non-gating, as the issues they catch had been present in the system for some time (we added the actual tests only after we found the real issue). Now, with Fedora 25 out, we decided to reopen the ticket and mark those tests as gating again. This means that in the future, if they fail, the Fedora 25 Atomic release will be blocked.

The suggestion of creating a rkt base image as a release artifact in the Fedora 26 cycle brought out some interesting discussion in the working group. Dusty Mabe suggested fixing the current issues first, and only then jumping into the new world. Another concern was whether this means we will support rkt in the Atomic Host. My reaction was maybe not, as deciding that would require coordinating with many other teams first. rkt is packaged in Fedora; you can install it on a normal system with the following command:

$ sudo dnf install rkt -y

But this does not mean we will be able to add support for building rkt images in OSBS, and Adam Miller reminded us that that would take a major development effort. It is also not on the roadmap of the release-infrastructure team. My proposal is to build only the base image officially for rkt, and then let our users consume that image. I will be digging into this more, as suggested by Dusty, and will report back to the working group.

Next, the discussion moved to a number of technical debts the working group is carrying. One of the major issues (just before the F25 release) was missing Atomic builds, but we managed to fix it in time. Jason Brooks commented that this release went much more promptly, and that we are making progress there :) A few other points from the discussion were:

  • Whether the Server working group agreed to maintain the Cloud Base image?
  • Having ancient k8s is a problem.
  • We will start having official Fedora containers very soon.

Documentation

Then the discussion moved to documentation, the biggest pain point of the working group in my mind. For any project, documentation can decide whether it becomes a success or not. Users will move on unless we can provide clearly written instructions (which actually work). For the Atomic Working Group, the major problem is not having enough writers. After the discussion at the last Cloud FAD in June, we managed to dig through the old wiki pages, and Trishna Guha is helping us move them into this repo. The docs are live at https://fedoracloud.rtfd.io. I have sent another reminder about this effort to the mailing list. If you can think of any example which can help others, please write it down and send in a pull request. It is perfectly okay if you submit it in some other document format; we will help you convert it into the correct one.

You can read the full meeting log here.

by Kushal Das at December 02, 2016 06:09 AM

December 01, 2016

Anwesha Das

micro:bit : a round around the Sun

"The future of the world is in my classroom today" - Ivan W. Fitzwater. To shape the world we need to give children the right environment to learn new things. The BBC micro:bit is such a project. It has been a year since the micro:bit was launched in the UK. This tiny device was launched with the aim of training young minds to think, create and, of course, code.

What is this fun-size piece?

The micro:bit is an ARM-powered embedded board. This cookie-sized computer (4 × 5 cm) has two input buttons, a 5×5 grid of LEDs, and two microcontroller processors, one of which allows the device to appear as a USB device on a personal computer irrespective of the operating system. The bigger chip in the upper left of the device is the ARM Cortex-M0 processor with 416 KiB of memory. For performing physics experiments it has a digital compass and an accelerometer. The micro:bit can be powered over USB or by an external battery. Other features, like Bluetooth connectivity, make the device even cooler.

From Python to MicroPython

MicroPython is Python for microcontrollers and other small, constrained hardware. It is an implementation of Python 3: it behaves like Python but does not use the CPython source code, having been written from scratch. MicroPython comes with a useful subset of the Python standard library, and is published under the permissive MIT license.
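On the micro:bit, MicroPython exposes the board's hardware through a microbit module. As a tiny illustration of the kind of program children write (this runs on the device itself under MicroPython, not on a desktop Python; the message text is just an example):

```python
# Runs on the BBC micro:bit under MicroPython, not on desktop Python.
from microbit import display, button_a, Image

display.scroll("Hello, dgplug!")
while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)  # light up a smiley on the 5x5 LED grid
    else:
        display.clear()
```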

Damien George : the man behind MicroPython

In 2013 Damien George started his project of shrinking the Python language so that it could run on small devices. He launched MicroPython as a Kickstarter project. It took him almost 6 months to prove it was a workable idea, after he had written a Python compiler lean enough to fit into 128 kilobytes of RAM. This "Genuinely Nice Chap" was awarded the PSF's Community Service Award in March 2016.

The Era of Digital creativity:

“I never teach my pupils, I only attempt to provide the conditions in which they can learn.” - Albert Einstein. For a long time, the BBC has been endeavoring to provide youth with good conditions to help them learn, especially around technology. They started this mission with the launch of the BBC Micro, a series of microcomputers, in the 1980s. The latest effort in that mission is the micro:bit project. The BBC started with the massive aspiration of giving these pocket-size fun machines to a million 11 and 12 year old children in the UK. An amazing aim indeed, motivating a generation to think, to visualize and to code. Children will be able to make their fantasies come true on this blank slate.

PSF joining the mission:

The BBC found that Python is a good choice for the micro:bit. Python is well known as a relatively easy language to learn, and is especially popular for teaching children. The PSF is one of the key partners in this project, as the micro:bit is not only a fascinating education use case, but also illustrates Python's utility in an area not normally associated with the language: embedded systems programming.

Whenever we think of an embedded device, ideas like 'low level programming' and 'coding in C' pop into our minds. But that makes things difficult, needs expertise, and is of course not very suitable for 11 or 12 year olds. Here MicroPython comes to help: with MicroPython, people can interact with the hardware in a lucid manner.

Nicholas Tollervey, our very own ntoll, is the person who acted as the bridge between the PSF and the project. It is largely thanks to his efforts that MicroPython now runs on the micro:bit (followed by much jumping and shouting of 'woo hoo'). The following is what the father of the Mu code editor has to say about his journey with the micro:bit.

What made you interested in the project?

I was relatively well known in UK Python circles as being interested in education (I created and organize the PyCon UK education track for teachers and kids). A friend heard about the BBC's request for partners in a programming-in-education project called "Make it Digital" and suggested I take a look. The BBC's request for partners actually mentioned Python! That was at the end of 2014.

When did you get involved with it?

I took the BBC's request for partners and, with the permission of the PSF board, put together a proposal for the PSF to become a partner.

At this stage, all I knew was that the project included programming and a mention of Python. Given this mention and Python's popularity as a teaching language, I felt it important that the Python community had the opportunity to step up and get involved.

In January 2015 the BBC invited the PSF to join a partnership relating to the mysterious "Make it Digital" project. We had to sign NDA agreements, and it was only then that I learned of the plans for the BBC micro:bit device.

How has this project helped/changed the education system in the UK?

Every 11-12 year old in the UK should have one in their possession by now. A huge number of resources have been made available, for free, for teachers and learners to learn about programming. Lots of the partner organisations have become involved in delivering educational resources.

From a Python-specific perspective, between 80-100 of the UK's Python developers have been involved in either writing code for kids, creating tools for kids to use, turning up at teach-meets to work with teachers, or building cool educational projects for people to use in the classroom.

It's too early to tell what the impact has been. However, in ten years' time I'll know it's been a success if I interview a new graduate programmer and they say they got started with the micro:bit. :-)

How did the PSF get involved with this project?

The partnership was between organisations. A rag-tag band of volunteers from the UK's Python community was not an option - ergo, my acting as a PSF Fellow on behalf of the PSF.

This was actually quite useful, since a large number of the volunteers got involved because they would be acting under the auspices of the PSF. It's an obvious way to motivate people into giving something back to the community - run it as a PSF project.

What was the PSF's role / how did the PSF help the project?

The original role of the PSF was to create educational resources, offer Python expertise and provide access to events such as PyCon UK through which teachers could be reached. The BBC explained that another partner was actually building the Python software for the device.

The complete story is told in this blog post:

http://ntoll.org/article/story-micropython-on-microbit

What are the changes that the project had brought?

Well, I believe it has brought the MicroPython project to the attention of many people in the education world and the wider Python community. I also believe it has brought educational efforts to the attention of programmers.

Education is important. It's how we decide what our community is to become, through our interaction with our future colleagues, friends and supporters.

micro:bit in PyCon UK 2016

This year's PyCon UK took place from 15th to 19th September 2016 in Cardiff. The PSF, as part of its prime mission "to promote the programming language Python", sponsors Python conferences around the globe. "They are very generous sponsors" was what ntoll had to say about the role of the PSF in PyCon UK.

To teach the students, one first has to educate the teachers. A lot of preparatory work had been done before actually distributing the micro:bits, including changes made to the students' CS curriculum. At PyCon UK, ntoll ran several workshops for teachers and another for kids as part of the education track, and almost 400 of these fun machines were distributed to the attendees.

Presently there is a lot of micro-fun and micro-love going on at PyCon UK.

Present status of micro:bit

One of the primary reasons for the PSF joining the mission was that the project is open source. Both the software and the hardware designs have been released under open licenses. Now that the designs are open and available to the masses, anyone and everyone can remake these fun pieces. The Micro:bit Foundation has been created, and the PSF is a part of it.

Renaissance in making : join the movement

The micro:bit has thousands of possibilities hidden in it. People are exploring them, drawing their dreams with the micro:bit. The following are some cool projects done with the micro:bit:

A keyboard constructed just by affixing a simple buzzer to the micro:bit.

Our childhood [game of snakes](https://twitter.com/HolyheadCompSci/status/750242493996957696) recreated with the help of the micro:bit and MicroPython.

An effective clap-controlled robot built with the help of the micro:bit and MicroPython.

"Knowledge is power". Nowadays, heroes don't come with a sword, but with a micro:bit in their hands. So, if you want to learn, have fun and be a part of the mission grab your own micro:bit and start coding.

by Anwesha Das at December 01, 2016 03:58 PM

November 26, 2016

Shakthi Kannan

Functional Conference 2016, Bengaluru

I attended Functional Conf 2016 at Hotel Chancery Pavilion, Bengaluru, between October 13-16, 2016. The conference proper was on October 14-15, 2016, with pre- and post-conference workshops around it.

After arriving early on the day of the workshop, I checked in to my hotel accommodation. A view of the Kanteerva stadium from the hotel:

Kanteerva Stadium

Pre-Conference Workshop

I had registered for the “Deep Dive into Erlang Ecosystem” workshop by Robert Virding, one of the creators of the Erlang programming language. He started the day’s proceedings with an introduction to Erlang basics and covered both sequential and concurrent programming. He also gave an overview of the Open Telecom Platform (OTP) and answered a number of questions from the participants. He, along with Joe Armstrong and Mike Williams, designed the Erlang programming language for telecommunication systems, with the whole system in mind, all the way from the ground up.

He also mentioned how WhatsApp was able to handle two million concurrent connections on a single box, peaking at three million at times. An Emacs and Lisp user himself, he also wrote Lisp Flavoured Erlang (LFE). He did not have much time to talk about it during the workshop, but he did share the differences between Erlang, Elixir and other languages being built around the Erlang ecosystem.

Day I

Robert Virding

The keynote of the day was from Robert Virding on “The Erlang Ecosystem”. He gave a good overview and history of the Erlang programming language, and the rationale for its design. He elaborated on the challenges they faced in the early days of computing, and the first principles they had to adhere to. They did not intend the language to be functional, but it turned out to be so, and that greatly helped their use case. One of the beautiful expressions in Erlang, representing a bit-level protocol format expressively, is shown below:

<<?IP_VERSION:4, HLen:4, SrvcType:8, TotLen:16, 
      ID:16, Flgs:3, FragOff:13,
      TTL:8, Proto:8, HdrChkSum:16,
      SrcIP:32,
      DestIP:32, RestDgram/binary>>
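For comparison, the same IPv4 header layout can be unpacked in Python with the struct module, though the sub-byte fields (version/HLen, Flgs/FragOff) need the manual masking that Erlang's bit syntax handles natively. This is my own sketch, not code from the talk:

```python
import struct

def parse_ipv4_header(dgram):
    # First 20 bytes: the fixed IPv4 header; the rest is payload.
    ver_hlen, svc, tot_len, ident, flags_frag, ttl, proto, chksum, src, dst = \
        struct.unpack("!BBHHHBBHII", dgram[:20])
    return {
        "version": ver_hlen >> 4,         # Erlang: ?IP_VERSION:4
        "hlen": ver_hlen & 0x0F,          # HLen:4 (header length in 32-bit words)
        "srvc_type": svc,                 # SrvcType:8
        "tot_len": tot_len,               # TotLen:16
        "id": ident,                      # ID:16
        "flags": flags_frag >> 13,        # Flgs:3
        "frag_off": flags_frag & 0x1FFF,  # FragOff:13
        "ttl": ttl,                       # TTL:8
        "proto": proto,                   # Proto:8
        "hdr_chksum": chksum,             # HdrChkSum:16
        "src_ip": src,                    # SrcIP:32
        "dest_ip": dst,                   # DestIP:32
        "rest": dgram[20:],               # RestDgram/binary
    }

# A minimal hand-built header: version 4, header length 5, TTL 64, protocol 6 (TCP).
hdr = struct.pack("!BBHHHBBHII", 0x45, 0, 20, 1, 0, 64, 6, 0, 0x0A000001, 0x0A000002)
parsed = parse_ipv4_header(hdr)
```

The Erlang version does all of this in a single pattern match, which is exactly the expressiveness Robert was highlighting.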

Robert’s keynote was followed by another keynote by Brian McKenna on “No Silver Bullets in Functional Programming”. He gave the pros and cons of using Functional and other programming paradigms, and discussed the trade-offs. A number of code examples were shown to illustrate the concepts.

The next talk that I attended was by Aloïs Cochard on “Welcome to the Machines”. He gave an overview on the history of various Haskell libraries for data stream processing (pipes, conduit) and finally provided a tutorial on machines.

Abdulsattar Mohammed introduced the need for dependent types using Idris with simple examples in his “Dependently Typed Programming with Idris” talk. The concepts were well narrated with numerous code snippets.

The next talk by Debasish Ghosh on “An algebraic approach to functional domain modeling” was a modelling exercise on how to map business logic into functional algebra. He demonstrated a real world step-by-step process on the transformation from a problem domain to the solution domain consisting of algebraic data types, functions that operate on them, and business rules.

Ravi Mohan started his talk, titled “Equational Reasoning - From Code To Math and Back Again”, with his learnings in the Functional Programming (FP) world and an overview of how to go about reasoning from code to math. His laptop had run out of battery power and he did not have his charger, so before his scheduled talk he re-created plain-text notes of his slides and walked us through the content.

“Implementing Spark like system in Haskell” was an interesting session by Yogesh Sajanikar on his attempt to create a DSL for map-reduce jobs. He covered much of the internals of his implementation and the challenges faced. The hspark code is available at https://github.com/yogeshsajanikar/hspark.

Day II

The second day began with the keynote by John Hughes on “Why Functional Programming Matters”. This was the best keynote of the conference, where John gave a very good historical perspective of FP and the experiences learnt in the process. His slide deck was excellent and covered all the necessary points that were part of his famous paper with the same title.

This was followed by a series of demos on cool features in Functional Programming languages - Erlang, Idris, APL, F# and Julia.

“Using F# in production: A retrospective” was a talk by Ankit Solanki on the lessons learned in using a functional language in implementing a tax e-filing application. They heavily use F# Type Providers to handle the variation in input CSV files.

“Real world functional programming in Ads serving” was a talk by Sathish Kumar from Flipkart on how they used functional programming in Java 8 for their product. They initially prototyped with Haskell, and used the constructs in Java.

I skipped the next talks, and spent time with Robert Virding in the Erlang booth.

“Rethinking State Management” was presented by Tamizhvendan S. He narrated examples of state management for a cafe application using F#. He also gave a demo of the Ionide text editor and its features.

Post-conference workshop

I attended John Hughes' workshop on property-based testing. Initially, I thought he would use Haskell QuickCheck, but in the workshop he used the Erlang implementation. John mentioned that the Haskell and Erlang implementations are different, and that their interests have diverged.

John Hughes

He started the workshop with an example of writing property tests for encoded SMS messages in Erlang. He also demonstrated how a minimal failing example is produced when a test fails. Choosing which properties to test is still an active research problem. He also demonstrated how to collect statistics from the test results to analyse and improve them.
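The roundtrip idea behind such properties can be illustrated in plain Python. This is a toy escaping scheme with hand-rolled random inputs, not QuickCheck and not the workshop's actual SMS code; the property is that decoding an encoded message always returns the original:

```python
import random
import string

# Toy "SMS encoding": escape backslashes and newlines.
def encode(msg):
    return msg.replace("\\", "\\\\").replace("\n", "\\n")

def decode(data):
    out, i = [], 0
    while i < len(data):
        if data[i] == "\\":
            # An escape pair: "\\n" means newline, "\\\\" means backslash.
            out.append("\n" if data[i + 1] == "n" else data[i + 1])
            i += 2
        else:
            out.append(data[i])
            i += 1
    return "".join(out)

# The property: decode(encode(msg)) == msg for random messages.
random.seed(42)
alphabet = string.ascii_letters + "\\\n "
for _ in range(1000):
    msg = "".join(random.choice(alphabet) for _ in range(random.randrange(20)))
    assert decode(encode(msg)) == msg, repr(msg)
```

A real property-based testing tool adds the crucial extra step John demonstrated: when the property fails, it shrinks the random input down to a minimal counterexample.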

Property-based testing has been used by his company, QuviQ, to test C protocols for the automobile industry; they were able to generate tests that detected bugs in CAN bus implementations. Here is a summary of the statistics for one project:

3,000 pages of specification
20,000 lines of QuickCheck
1,000,000 LoC, 6 suppliers
200 problems
100 problems in the standard

He also shared his experience generating tests for Klarna, an invoicing-service web shop that uses Mnesia, the distributed Erlang database. He concluded by saying that we should not write tests; they should be generated.

Overall, the workshops were quite useful. It was good to have met both Robert Virding and John Hughes.

November 26, 2016 04:45 PM

Sayan Chowdhury

PyCon India 2016

PyCon India this year was held in New Delhi, at the JNU Convention Center.

Dev Sprints

During the Dev Sprint, Farhaan and Vivek were sprinting on Fedora Infrastructure projects primarily helping people contribute to Pagure.

Other projects/orgs like SciPy, Red Hat team, FOSSAsia, Junction etc were also sprinting.

The Dev Sprint turned out to have good participation, and a couple of PRs were sent out by the participants. More than that, it was about participants getting to know how to contribute.

Red Hat Booth

Red Hat, being a PyCon India sponsor, set up a booth. Praveen, Suraj, Trishna, Ganesh and Rupali talked to people, explaining different topics ranging from contributing to open source to the products & services Red Hat provides.

I stumbled upon the booth a couple of times and helped them explain how to contribute to the Fedora Infrastructure projects.

The booth was shared with PyLadies Pune (kudos to Rupali for letting them share the booth).

Talks

I mostly spent my time in the hallway, talking and discussing things with people, but I did attend the interesting talks.

I attended the talks by @rtnpro and @bamachrn. Sadly, both their talks were in the same slot.

@bamachrn explained the complete CentOS Community Container Pipeline.

@rtnpro talked about how we went ahead and built ircb, implementing realtime microservices with server-side Flux.

Recently, I have been trying to improve my knowledge of licenses. @anwesha's talk was a good one, giving an idea of why to use a license, which license to choose when, and the pros and cons of each.

DGPLUG Annual Meet

Every year at PyCon India we have our annual DGPLUG meet at the conference, where people from different parts of the country can meet each other once a year. We discussed what the attendees gained from this year's training, what went wrong, and what is stopping them from contributing. We were joined by Sartaj, who shared valuable ideas on how to work on one's thinking process in order to contribute.

PyLadies

This year PyCon India had a PyLadies presence. During the Open Space session, there was an open discussion about PyLadies, which Jeff Rush, Van Lindberg, Paul Everitt and Dmitry Filippov joined to share their experience in the community.

By the way, we did have a really nice PyLadies umbrella as a prop at the PyLadies booth.

Check the Flickr Album for pictures

November 26, 2016 09:40 AM

November 02, 2016

Runa Bhattacharjee

Learning yet another new skill

About 3 weeks ago, when the autumn festival was in full swing and I was away from home in Bangalore, I made my way to a nearby maker space to spend a weekend learning something new. Besides the thought of spending a lonely weekend doing something different, I was egged on by a wellness initiative at my workplace that encouraged us to find some space away from work. I signed up for a 2-day beginner’s carpentry workshop.

 

workfloor
When I was little, I often saw my Daddy working on small pieces of wood with improvised carving tools to make little figurines or cigarette holders. The cigarette holders were lovely but they were given away many years ago, when he (thankfully) stopped smoking. Some of the little figurines are still around the house, and a few larger pieces made out of driftwood remain in the family home. However, I do not recall him making anything like a chair or a shelf that could be used around the house. In India, it is the norm to get such items made, but by the friendly neighborhood carpenter. Same goes for many other things like fixing leaking taps, or broken electrical switches, or painting a room. There is always someone with the requisite skills nearby who can be hired. As a result, many of us lack basic skills in these matters as opposed to people elsewhere in the world.

 

I did not expect to become an expert carpenter overnight, and hence went with the hope that my carpentry skills would improve from 0 to maybe 2, on a scale of 100. The class had 3 other people – a student, a man working at a startup, and a doctor. The instructor had been an employee at a major Indian technology services company, and now had his own carpentry business and these classes. He had an assistant. The space was quite large (the entire ground floor of the building) and held an electronics lab and a woodwork section.

 

We started off with an introduction to several types of soft and hard woods, and plywoods. Some of them were available in the lab as they were going to be used during the class, or were stored in the workshop. Rarer woods like mahogany and teak were displayed as small wooden blocks. We were going to use rubber wood, and some plywood, for our projects. Next, we were introduced to some of the tools – with and without motors. We learnt to use the circular saw, table saw, drop saw, jigsaw, power drill and wood router. Being more petite than usual and unaccustomed to such tools, I found the 400-600W saws quite terrifying at the beginning.

 

clock
The first thing I made was a wall clock shaped like the beloved deer – Bambi. On a 9”x 9” block of rubber wood, I first traced the shape. Then I used a jigsaw to cut off the edges and form the outline, and the drill to make some holes and create the shapes for the eyes and spots. The sander was eventually used to smooth the edges. This clock is now proudly displayed on a wall at my Daddy’s home, very much like my drawings from age 6.

 

shelf
Next, we made a small shelf with dado joints that can be hung on the wall. We started with a block of rubber wood about 1’6’’ x 1’. The measurements for the various parts of this shelf were provided on a piece of paper, and we had to cut the pieces using the table saw, set to the appropriate width and angle. The places where the shelves connected with the sides were chiseled out and smoothed with a wood router. The pieces were glued together and nailed. The plane and sander were used to round the edges.

 

The last project for the day was to prepare the base for a coffee table. The material was a block of pinewood 2 inches thick and 2’ x 1’. We had to first cut these blocks from a bigger block, using the circular saw. Next, these were taken to the table saw to make 5 long strips of 2 inch width. One of these strips had about 1/2 inch from each edge narrowed down into square-ish pegs to fit into the legs of the table. The legs had some bits of the center hollowed out so they could be glued together into X shapes. These were left overnight to dry, and next morning, with a hammer and chisel, the holes were made into which the pegs of the central bar could be connected. Finally, the drop saw was used to chop off the edges to make the table stand correctly. I was hoping to place a plywood top on this base to use it as a standing desk; however, it may need some more chopping to bring it to the right height.

 

tray
The final project was an exercise for the participants to design and execute an item using a 2’ x 1’ piece of plywood. I chose to make a tray with straight edges using as much of the plywood as I could. I used the table saw to cut the base and sides. The smaller sides were tapered down, and handles were shaped out with a drill and jigsaw. These were glued together and then nailed firmly in place.

 

By the end of the 2nd day, I felt more confident handling the terrifying, but surprisingly safe, pieces of machinery. Identifying different types of wood, or making an informed decision when selecting wood, may need more practise and learning. The biggest challenge I think I would face if I did more of this is workspace. Like many other small families in urban India, I live high up in an apartment building, with limited space. This means that setting up an isolated area for a carpentry workbench would not only take up space, but, without an enclosure, would let enough particle matter float around a living area. For the near future, I expect not to acquire any motorized tools, but to get a few manual tools that can be used to make small items (like storage boxes) with relative ease and very little disruption.

by runa at November 02, 2016 08:09 AM

October 25, 2016

Trishna Guha

Containerization and Deployment of Application on Atomic Host using Ansible Playbook

This article describes how to build a Docker image and deploy a containerized application on an Atomic host (or any remote host) using an Ansible playbook.

Building a Docker image for an application and running a container (or a cluster of containers) is nothing new. The idea here is to automate the whole process, and this is where Ansible playbooks come into play.

Note that you can use a Cloud or Workstation based image to execute the following tasks. Here I am issuing the commands on Fedora Workstation.

Let’s see how to automate the containerization and deployment process for a simple Flask application.

We are going to deploy the container on a Fedora Atomic host.

First, let’s create a simple Flask Hello-World application.

This is the Directory structure of the entire Application:

flask-helloworld/
├── ansible
│   ├── ansible.cfg
│   ├── inventory
│   └── main.yml
├── Dockerfile
└── flask-helloworld
    ├── hello_world.py
    ├── static
    │   └── style.css
    └── templates
        ├── index.html
        └── master.html

hello_world.py

from flask import Flask, render_template

APP = Flask(__name__)

@APP.route('/')
def index():
    return render_template('index.html')

if __name__ == '__main__':
    APP.run(debug=True, host='0.0.0.0')

static/style.css

body {
  background: #F8A434;
  font-family: 'Lato', sans-serif;
  color: #FDFCFB;
  text-align: center;
  position: relative;
  bottom: 35px;
  top: 65px;
}
.description {
  position: relative;
  top: 55px;
  font-size: 50px;
  letter-spacing: 1.5px;
  line-height: 1.3em;
  margin: -2px 0 45px;
}

templates/master.html

<!doctype html>
<html>
<head>
    {% block head %}
    <title>{% block title %}{% endblock %}</title>
    {% endblock %}
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs7" crossorigin="anonymous">
    <link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-T8Gy5hrqNKT+hzMclPo118YTQO6cYprQmhrYwIiQ/3axmI1hQomh7Ud2hPOy8SP1" crossorigin="anonymous">
    <link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
    <link href='http://fonts.googleapis.com/css?family=Lato:400,700' rel='stylesheet' type='text/css'>

</head>
<body>
<div id="container">
    {% block content %}
    {% endblock %}</div>
</body>
</html>

templates/index.html

{% extends "master.html" %}

{% block title %}Welcome to Flask App{% endblock %}

{% block content %}
<div class="description">

Hello World</div>
{% endblock %}

Let’s write the Dockerfile.

FROM fedora
MAINTAINER Trishna Guha <tguha@redhat.com>

RUN dnf -y update && dnf -y install python-flask python-jinja2 && dnf clean all
RUN mkdir -p /app

COPY files/ /app/
WORKDIR /app

# Flask's development server listens on port 5000 by default
EXPOSE 5000

ENTRYPOINT ["python"]
CMD ["hello_world.py"]

Now we will work on Ansible playbook for our application that deals with the automation part:

Create inventory file:

[atomic]
IP_ADDRESS_OF_HOST ansible_ssh_private_key_file='PRIVATE_KEY_FILE'

Replace IP_ADDRESS_OF_HOST with the IP address of the atomic/remote host and ‘PRIVATE_KEY_FILE’ with your private key file.

Create ansible.cfg file:

[defaults]
inventory=inventory
remote_user=USER

[privilege_escalation]
become_method=sudo
become_user=root

Replace USER with the user of your remote host.

Create main.yml file:

---
- name: Deploy Flask App
  hosts: atomic
  become: yes

  vars:
    src_dir: [Source Directory]
    dest_dir: [Destination Directory]

  tasks:
    - name: Create Destination Directory
      file:
       path: "{{ dest_dir }}/files"
       state: directory
       recurse: yes

    - name: Copy Dockerfile to host
      copy:
       src: "{{ src_dir }}/Dockerfile"
       dest: "{{ dest_dir }}"

    - name: Copy Application to host
      copy:
       src: "{{ src_dir }}/flask-helloworld/"
       dest: "{{ dest_dir }}/files/"

    # Note: a bare `command: cd ...` task has no effect, because each
    # Ansible task runs in its own shell; the build task below therefore
    # uses absolute paths.

    - name: Build Docker Image
      command: docker build --rm -t fedora/flask-app:test -f "{{ dest_dir }}/Dockerfile" "{{ dest_dir }}"

    - name: Run Docker Container
      command: docker run -d --name helloworld -p 5000:5000 fedora/flask-app:test
...

Replace [Source Directory] in src_dir field in main.yml with your /path/to/src_dir of your current host.

Replace [Destination Directory] in dest_dir field in main.yml with your /path/to/dest_dir of your remote atomic host.

Now simply run $ ansible-playbook main.yml :). To verify that the application is running, issue $ curl http://localhost:5000 on your atomic/remote host.
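The same check can also be done from Python using only the standard library. This is a hypothetical helper for illustration (the URL matches the example above; it is not part of the playbook):

```python
import urllib.request

def app_is_up(url, timeout=5):
    """Return True if an HTTP GET on `url` answers with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Covers connection refused, timeouts, and DNS failures.
        return False

# On the atomic host you would call: app_is_up('http://localhost:5000')
```

Such a helper is handy for wiring the deployment check into a script or CI job instead of eyeballing curl output.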

You can also manage your containers running on remote host using Cockpit. Check this article to know how to use Cockpit to manage your containers: https://fedoramagazine.org/deploy-containers-atomic-host-ansible-cockpit

fotoflexer_photo

screenshot-from-2016-10-21-18-52-45

Here is the repository of the above example:  https://github.com/trishnaguha/fedora-cloud-ansible/tree/master/examples/flask-helloworld

My future post will be related to ansible-container, where I will describe how we can build a Docker image and orchestrate containers without writing any Dockerfile 🙂.


by Trishna Guha at October 25, 2016 10:52 AM

October 20, 2016

Farhaan Bukhsh

PyCon India 2016

Day 0

“This is awesome!” was my first reaction when I boarded my first flight to Delhi. I was having trouble finding proper accommodation; Kushal, Sayan and Chandan helped me a lot with that, and I finally got the honour of bunking with Sayan, Subho and Rtnpro, which I will never forget. So, I landed and went directly to the JNU convention center, where I met the whole Red Hat intern gang. It was fun to meet them all. I had proposed Pagure for the Dev Sprint and pulled in Vivek to do the same.

The dev sprint started and there was no sign of Vivek or Saptak. Saptak is a FOSSASIA contributor, and Vivek contributes to Pagure with me. Finally it was my turn to talk about Pagure on stage; the experience and the energy were beautiful. We got a lot of young, new contributors, and we tried to guide them and get each of them to send at least one PR. One of them was lucky enough to actually make a PR, and it got readily merged.

I met a lot of other contributors and mentors, and each and every project was simply amazing. I wish I could help all of them some day. We also met Paul, who writes code for PyCharm; we had a nice discussion about Vim vs. PyCharm.

Finally the day ended with Vivek, Sayan, Subho, Saptak and me going out to grab some dinner. I bunked with Sayan and Subho and we hacked all night. I was configuring my Weechat and trying all the plugins available, and trust me, there are a lot of them.

Day 1

I was the session chair in one of the lecture rooms, and it was a crazy experience: from learning to write firmware for a drone, to using generators to write multi-threaded programs, to working with Salt Stack. The food was really good, but the line for food was as “pythonic” as the code should be.

There were a lot of stalls put up, and I went to all of them and had a chat. My favorite was PyCharm's, because Paul promised to teach me some neat tricks for using PyCharm.

The Red Hat and PyLadies booths were also there; they were very informative, raising awareness about certain social issues and getting women into tech.

We had two keynotes that day, one by BG and the other by VanL, and trust me, both were so amazing that they make you look at technology from a different viewpoint altogether.

One of the amazing parts of such conferences are the Open Spaces and lightning talks. I attended a few Open Spaces and found them really enthralling. I was waiting for the famous staircase meeting of Dgplug. We met Kushal’s mentor, Sartaj, and he gave us deep insight into what we should contribute to open source, and why. He basically told us that even if one’s code is not used by anyone, he will still be writing code for the love of doing it.

After this we went for the Dgplug/volunteers dinner at BBQ Nation; it was an eventful evening 😉, to be modest.

Day 2 

On the last day of the conference, I remember wondering how a programming language translates into a philosophy, and how that philosophy unites a diverse nation like India. The feeling was amazing, but I could sense the sadness: the sadness of parting from friends who meet once a year. I could now actually match all the IRC nicks with their faces. It just brings a lot more to the table.

At last we all went back to the humdrum of our normal lives with the promise to meet again. But I still wonder how a technology brings comradeship between people from all nooks and corners of life, how it connects a school teacher to a product engineer. This makes me feel that this is more than just a programming language; it is a unique medium that unites people and gives them the power to make things right.

With this thought fhackdroid signs out!

Happy Hacking!


by fardroid23 at October 20, 2016 03:31 PM

October 18, 2016

Shakthi Kannan

GNU Emacs - News Reader

In this next article in the GNU Emacs series, we shall learn how to use GNU Emacs as a news reader.

Elfeed

Elfeed is an Emacs web feed reader that is extensible and supports both Atom and RSS feeds. It is written by Christopher Wellons.

Installation

We shall use Milkypostman’s Experimental Lisp Package Archive (MELPA) to install Elfeed. Create an initial GNU Emacs start-up file that contains the following:

(require 'package) ;; You might already have this line
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/"))

(when (< emacs-major-version 24)
  ;; For important compatibility libraries like cl-lib
  (add-to-list 'package-archives '("gnu" . "http://elpa.gnu.org/packages/")))
(package-initialize) ;; You might already have this line

The above code snippet has been taken from the MELPA project documentation website http://melpa.org/#/getting-started, and has been tested on GNU Emacs 24.5.2.

You can now start GNU Emacs using the following command:

$ emacs -Q -l ~/elfeed-start.el

You can obtain the list of available packages using M-x list-packages, which will search the melpa.org and elpa.gnu.org repositories. You can search for ‘elfeed’ in this buffer, and select the same for installation by pressing the ‘i’ key. To actually install the package, press the ‘x’ (execute) key, and Elfeed will be installed in ~/.emacs.d/elpa directory.

Configuration

You can create a shortcut to start Elfeed using the following code snippet in your ~/elfeed-start.el file.

(global-set-key (kbd "C-x w") 'elfeed)

The list of feeds can be defined as shown below:

(setq elfeed-feeds
      '(("http://www.shakthimaan.com/news.xml" people)
        ("http://arduino.cc/blog/feed/" projects)
        ("http://planet-india.randomink.org/rss10.xml" people planet)
        ))

Tags can be added at the end of the feed. The above feeds include ‘people’, ‘projects’ and ‘planet’ tags.

Usage

You can use the C-x w shortcut to start Elfeed. If you press ‘G’, it will fetch the latest news feeds from the servers, starting with the message ‘3 feeds pending, 0 in process …’. A screenshot of Elfeed in GNU Emacs is shown below:

Elfeed

The RSS entries are stored in ~/.elfeed directory on your system.

You can read a blog entry by pressing the ‘Enter’ key. If you would like to open an entry in a browser, you can use the ‘b’ key. In order to copy the selected URL entry, you can use the ‘y’ key. To mark an entry as read, you can use the ‘r’ key, and to unmark an entry, press the ‘u’ key. You can add and remove tags for an entry using the ‘+’ and ’-’ keys, respectively.

You can also filter the feeds based on search criteria. Pressing ’s’ will allow you to update the filter that you want to use. There are many filter options available. You can use ‘+’ to indicate that a tag must be present, and ’-’ to indicate that the tag must be absent. For example, “+projects -people”.

The filter text starting with ‘@’ represents a relative time. It can contain plain English text combined with dashes – for example, ‘@1-month-ago +unread’. The ’!’ notation can be used to negate a filter. To limit the number of entries to be displayed, you can use the ‘#’ pattern. For example, ‘+unread #5’ will list five unread blog articles. A screenshot of Elfeed with a filter applied is shown in the following figure:

Elfeed filter

You can also use regular expressions as part of your filter text. The default search filter can be changed by modifying the value of elfeed-search-filter. For example:

(setq-default elfeed-search-filter "@1-month-ago +unread")

The search format date can be customized as shown below:

(defun elfeed-search-format-date (date)
  (format-time-string "%Y-%m-%d %H:%M" (seconds-to-time date)))

Elfeed also has an export option to view the feeds in a browser. If you install the elfeed-web package from the packages list, you can then start it using M-x elfeed-web-start. You can then start a browser, and open http://localhost:8080/elfeed/ to view the feeds. A screenshot is shown below:

Elfeed web

The entire contents of the elfeed-start.el configuration file are shown below:

(require 'package) ;; You might already have this line
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/"))

(when (< emacs-major-version 24)
  ;; For important compatibility libraries like cl-lib
  (add-to-list 'package-archives '("gnu" . "http://elpa.gnu.org/packages/")))
(package-initialize) ;; You might already have this line

(global-set-key (kbd "C-x w") 'elfeed)

(defun elfeed-search-format-date (date)
  (format-time-string "%Y-%m-%d %H:%M" (seconds-to-time date)))

(setq elfeed-feeds
      '(("http://www.shakthimaan.com/news.xml" people)
        ("http://arduino.cc/blog/feed/" projects)
        ("http://planet-india.randomink.org/rss10.xml" people planet)
        ))

Gnus

Configuration

Gnus is an Emacs package for reading e-mail and Usenet news. The nnrss backend supports reading RSS feeds. Gnus is available by default in GNU Emacs. After launching emacs using emacs -Q in the terminal, you can start Gnus using M-x gnus.

To add a new RSS entry, you can use ‘G R’. It will prompt you with the message ‘URL to Search for RSS:’. You can then provide the feed, for example, http://www.shakthimaan.com/news.xml. It will try to connect to the server and will provide you the message ‘Contacting host: www.shakthimaan.com:80’. After a successful connect, it will prompt for the title, ‘Title: Shakthimaan’s blog.’ You can simply hit Enter. You will then be prompted for a description, ‘Description: RSS feed for Shakthimaan’s blog.’ You can hit Enter to proceed. Now, the blog entry has been added to Gnus. In this fashion, you can add the other blog entries too. A screenshot of the main Gnus group buffer is shown below:

Gnus

Usage

You can press ‘g’ to refresh the buffer and ask Gnus to check for latest blog entries. Using the ‘Enter’ key will open the feed, and the list of blogs for a feed. A screenshot is shown in Figure 5.

Gnus articles

You can press ‘Enter’ on a blog entry, and it will open the contents in a new buffer. It will then be marked as read, indicated by ‘R’. A screenshot of a blog entry rendering text and image is shown in the following figure:

Gnus blog entry

You can press ‘q’ to quit from any level inside Gnus. You are encouraged to read the Gnus tutorial ( http://www.emacswiki.org/emacs/GnusTutorial ) and manual ( http://www.gnus.org/manual/big-gnus.html ) to learn more, and to customize it for your needs.

October 18, 2016 04:00 PM

October 01, 2016

Suraj Deshmukh

PyCon India 2016

This was my second PyCon and first visit to Delhi. I was excited to meet all old friends from dgplug, PythonPune, PyCon 2015, folks from twitter and IRC. It was different this time, because I co-hosted a Docker workshop with my colleague Lalatendu Mohanty at a large conference like this and was travelling with friends from PythonPune.

img_20160923_152730

Day 0

This was the tutorials day (workshops + devsprints). We (Lala and I) started the workshop early in the morning. Lala explained the concept of containers and how the ecosystem of application deployment and delivery has changed with their arrival, while I did the hands-on walkthrough of the workshop.

img_20160923_101538

Lala along with Praveen Kumar helped the folks during the hands-on. The workshop went well and people were curious to learn things in this changing world.

 

photo305468764721358848

After the workshop I mainly sat in the devsprint room and did some hacking (random things). Meanwhile Shubham Minglani, my team-mate, helped folks understand ansible-container; he was mentoring for the ansible-container project in the devsprint. Given the limited internet speed, Shubham set up an excellent fallback plan for folks to pull containers: he brought his Raspberry Pi and a wifi router, where the Raspberry Pi ran an FTP server and the wifi router was the access point to it. That was an awesome setup, and a lot of effort for potential contributors to the project.

At the end of the day, in the volunteers' meet, I was handed the responsibility of Lecture Hall 2, a.k.a. Audi 3, where I and a bunch of volunteers made sure things went smoothly for the next two days.

Later we went for a short walk around JNU campus, where we could smell the political air in the campus.

Day 1

Day 1 started with BG‘s keynote, but I had to rush out in the middle of it to take care of my hall. This started with making sure the volunteers for the hall were present, the speakers arrived on time, and the day's schedule and the speakers' info were handy.

Having handled the responsibility of a hall, I realised how hard it is to deal with so many behind-the-scenes things for a conference to run smoothly. Because of this I could not attend any talk properly, as I had to rush off to take care of unforeseen things that would pop up. Things went well that day, though.

Day 1 ended with the keynote by Van Lindberg, which I attended completely. In his awesome talk, he presented how failures in the software world are essential, and how one can cope with and learn from them.

img_20160924_172257_hht

Folks at Red Hat booth:

29960442065_d129327ac0_k

Day 2

Similar to Day 1, there was a keynote, this time from Andreas Muller, from which I had to drop off in the middle to make sure all things in my hall were ready. The first talk in my hall was by my colleague Ratnadeep Debnath on real-time microservices with server-side Flux, where he presented how he has implemented a micro-service, async architecture in his very own projects waarta and IRCB.

SONY DSC

After that we had our usual yearly dgplug staircase meeting, but this time not on a staircase but on a ramp, where Kushal asked for feedback and gave general guidelines.

29880363361_801441d340_k

This time dgplug was even bigger and there were more folks doing awesome stuff. Kushal also acquainted us with his mentor – Sirtaj Singh Kang, who introduced him to Python.

29389909764_c7fe4ff1e5_k

This was followed by a Red Hat-sponsored talk in my hall, presented by Kushal, where he talked mainly about how Python is at the heart of Red Hat's open source projects and ecosystem, with projects like anaconda, Ansible, RDO, etc., to name a few.

29926454396_5ebb36c577_k

The last day of the PyCon ended with photo session with dgplug folks and various other groups.

29313770323_d3110e1d9c_k

Folks from PythonPune

29926453696_8709a73444_k

On a closing note, a shout-out to the volunteers, viz. Shashank Kumar, Pushplata Ranjan, Prashant Jamkhande, Girish Joshi and others, for being there and helping me out for two days. Things went smoothly because of you and your helping hands.

This year’s PyCon was a good and a memorable experience. I made new friends and saw new places ( worth mentioning my extended stay, to explore Delhi and nearby places).

Credits

Thanks to: Kushal, Sayan, Chandan and other folks for awesome clicks, that I used in this blog. Suraj Narwade for all the pre-conference management. And finally Hemani for help with this blog.


by surajssd009005 at October 01, 2016 03:33 PM

September 30, 2016

Farhaan Bukhsh

Weechat-Tmux

Recently I went to PyCon India (will blog about that too!), where Sayan and Vivek introduced me to Weechat, a terminal-based IRC client. From the moment I saw Sayan's Weechat configuration, I was hooked.

The same night I started configuring my Weechat. It's such a beautiful IRC client that I was regretting not having used it before. It just transforms your terminal into an IRC window.

For fedora you need to do:

sudo dnf install weechat

Some of the configuration and plugins you need are :

  1. buffer
  2. notify-send

That's pretty much it, but it doesn't stop there; you can make the client a little more aesthetic. You can set up Weechat using their documentation.

The clean design kind of makes you feel happy, plus adding plugins is not at all a pain. In the Weechat window you just type /script install buffer.pl and it installs in no time. There are various external plugins in case you want to use them, and writing a plugin is actually fun; I have not tried that yet.

screenshot-from-2016-09-30-23-02-13

I also used to use a bigger font, but now I find this size more soothing to the eyes. It is because of Weechat that I got to explore this beautiful tool called tmux, because on a normal terminal screen Weechat lags. What I mean by lag is that keystrokes somehow arrive after 5-6 seconds, which makes for a bad user experience. I pinged people in the #weechat channel on IRC with the query; the community is amazing, they helped me set it up and use it efficiently, and they told me to use tmux or screen. With tmux my sessions are persistent and without any lag.

To install tmux on fedora:

sudo dnf install tmux

tmux is a terminal multiplexer, which means it can extend one terminal screen into many screens. I got to learn a lot of tmux concepts, like sessions, panes and windows. Once you know these things, tmux is really a fun ride. Of the blogs I went through for configuring and using tmux, the best I found was hamvoke; the whole series is pretty amazing. So basically my workflow goes: for every project I am working on, I have a tmux session named after it, created with the command:

tmux new-session -s <name_session>

Switching between two sessions is done by attaching and detaching (tmux attach -t <name_session> attaches, and the prefix key followed by d detaches). And I have one persistent session running Weechat. I thought I had explored everything in tmux, but that can't be it; I came to know that there is a powerline for tmux too. That makes it way more amazing, so this is how a typical tmux session with powerline looks:

screenshot-from-2016-09-30-23-31-10

I am kind of loving the new setup and enjoying it. I am also constantly consulting a tmux cheatsheet :P because it's good to look up what else you can do, and I saw various screencasts on YouTube where tmux+vim makes things amazing.

Do let me know how you like my setup, or how you use yours.

Till then, Happy Hacking!🙂

 


by fardroid23 at September 30, 2016 06:16 PM

Anwesha Das

My talk about software licenses in PyCon India

The title of my talk was "The trends in choosing licenses in Python ecosystem". As a lawyer, what interests me is the legal foundation of the open source world: the licenses, which define the reach of the software. Most developers would raise their eyebrows to hear that; most think of licenses as large, boring, legal, gibberish text. So the aim of my talk was to give them an overview of licenses, why they are important, and the best practices for developers regarding licenses. I framed my talk (as a lawyer) around licenses, majorly focusing on:

  • What are the different kinds of licenses?
  • A definition and detailed explanation of each of the open source licenses.
  • What is the difference between free software and open source software?
  • A little bit of my work around PyPI.
  • Why have developers chosen some particular licenses more than others?
  • The answers to that question.
  • The best practices for developers while choosing a license, in a gist.

Three days before my travel I got a mail from the organizers saying that all the proceedings of the conference (the slides, talk videos, everything) would be in the Public Domain. I was shocked and stunned. It took me a lot of time, and many mails, to make them understand that speaking in front of the public does not make anything Public Domain. I was also tweeting to the speakers: please add a license to the slides for your talk. But even then, on Day 0 at the venue itself, I got to know that many speakers had no clue what the licenses for their talks were.

I realized my talk was too lawyerish. It would surely be a super flop talk; people would leave the hall within 5 minutes. So I needed to reframe my talk from a lawyerish way to a developerish way. I had only a few hours (approximately 10) left. Kushal really wanted to stay with his friends, but I dragged him along (sudo wify power :)). We reached the Airbnb where we were staying and I started rearranging my slides. I was stressing on:

  • My project.
  • How I did my work, the programish way.
  • The basic concepts of licenses, without going into much detail.
  • Explaining them with real-life examples that everyone will understand.
  • A special stress on the Public Domain (given the problems going on at PyCon India).
  • The largest part of my talk: covering the best practices for developers.

The slides were ready by 10 PM. We had a quick dinner, and while waiting for it I wrote a blogpost. That very day I read a blog post by Zainab Bawa about practicing talks; such perfect timing and a great post. I discerned again that I needed to start practicing my reframed talk. It was almost 12, and I had no time left. I was tense and scared. I began practicing, and of course I could not do it properly. That made me more frightened and nervous. Then started the whole episode: a super attack of inferiority complex, IANAD (I Am Not A Developer) syndrome, crying, yelling at Kushal, and many of the other regular melodramas before I give any talk. I wasted almost half an hour and then practiced my talk again. It was much better now; at least Kushal liked it much more. We slept at 2 AM and got up to a shrill alarm at 5:30 AM. I practiced the talk twice more before we had to leave for the venue. We reached there and did the PyLadies-related work.

My talk was scheduled for 4 PM. After lunch I did my PyLadies table duties for half an hour; thanks to all my PyLadies friends, who released me from the duty. I went to the devsprint room and practiced my talk over there.

I reached Lecture Hall 2, where my talk was scheduled, 5 minutes before time. It took some time to get the setup ready. I started my talk with my memory of PyCon India 2012, which laid the foundation for this talk. I defined software licenses, copyright, the different open source licenses, and FOSS, with various real-life examples to help developers easily understand the legal concepts in a lucid form. I told them about my project and showed them how I got the licenses of each package on PyPI (and I made a silly goof-up there; no excuse, but I was tense). Then I focused on the most important part of my talk, the best practices for developers. Let's discuss them:

  1. Choose a license matching the intended use of the software; if it is a library/module, choose a permissive one so people can easily use it in their code.

  2. Create a license file, i.e. license.txt and/or license.md. The file should contain the name of the license as well as its full text.

  3. The developer must add a copyright header to each significant source code file. "Significant" covers both volume and importance.

  4. In the README, or the equivalent introductory file that contains all the basic information about the project, state the license name, and add a reference directing the reader to the license file.

  5. If the users' freedom is what you aim for, and you want improvements made to your software to be shared back with the community and society, then the GNU GPL is the license for you. With the GPL you need to distribute the original source code along with the modifications you have made.

  6. If you are concerned about patents and at the same time want an open source license, then Apache is suitable for your need. But, very importantly, it only takes care of the patent part, and not all other intellectual property rights (such as trademarks).

  7. Please do not invent your own license. There are plenty of nicely drafted licenses meeting all your requirements; trust the legal experts, they know the law. To explain this I gave the example of the little goof-up I made while typing a simple thing during the explanation of my project: as someone who is not primarily a developer, I made that mistake, and a person who is primarily a developer and not a lawyer will make the same kind of mistake in drafting a license. Covenants and conditions are some of the things you have to take care of while drafting a license.

  8. If you ignore all the caution and try to draft your own license anyway, try not to have clauses like "Buy me a beer" or "Don't be evil" (please keep your funny bones to yourselves); the legal implications of these are different.

  9. There are several places to find licenses: you can have a look at opensource.org, copyleft.org, and the FSF; the Fedora Project also maintains a nice list of open source licenses (compatible with the Fedora Project). choosealicense.com (by GitHub) is another nicely maintained website for choosing a license.

  10. By choosing a license, you choose the community. Therefore, if you are confused about the license for your software, choose one that is popular in your community.

Points for a developer to remember while choosing a license

  1. Remember, if you create a new project, you hold the copyright to it, and by default you retain all rights.

  2. A license says that others can use/copy/modify your creation, but only if they follow the rules of the license.

  3. If you choose to let other people use/copy your work, you should grant them a license with clauses to that effect.

  4. Remember, by choosing a license you are setting a boundary for your software.
  5. You should NEVER use something that doesn't explicitly state what its license is.
  6. Permissive licenses let them use your work in a commercial or proprietary application.
  7. Copyleft licenses require them to make their changes available under the same license.
  8. There are huge points of distinction between copyleft and permissive licenses.

Thank you

There are few people to thank actually without whose help it would not have been possible for me to give this talk:

Van Lindberg: Whatever little law related to open source I have learnt, it has been in the Van-ish way. His book has helped me immensely. Thank you, Van, for checking my slides and giving me the confidence that my understanding is right.

Nick Coghlan: Thank you for giving your valuable advice whenever I needed it, and for your moral support when I was down.

Donald Stufft: Thank you for replying to my mail and reviewing my talk. Moreover, thank you for doing the gigantic work of maintaining PyPI.

Jared Smith: About you, where should I start? Whenever there was a problem regarding this talk, my understanding, the framing of my talk, anything, you helped. Be it the early morning video sessions or the long (really huge) mail thread, thank you so much for helping me.

I will end with the note that the whole foundation of open source is its licenses. Therefore, please be sure when you choose one.

by Anwesha Das at September 30, 2016 03:51 PM

Trishna Guha

What is if __name__ == ‘__main__’ ?

 

A module is simply a Python file with the .py extension. A module can contain variables, functions, and classes that can be reused.

In order to use a module we need to import it using the import statement. Check the full list of built-in modules in Python here: https://docs.python.org/3.6/library.

The first time a module is loaded into a running Python script, it is initialized by executing the code in the module once. To learn the various ways of importing modules, visit: https://docs.python.org/3.6/tutorial/modules.html

if __name__ == ‘__main__’:

We see if __name__ == ‘__main__’: quite often. Let’s see what this actually is.

__name__ is a global variable in Python that exists in all namespaces. It is an attribute of a module: basically the name of the module, as a str (string).

Show Me Code:

Create a file named ‘mymath.py’, type the following code, and save it. We have defined a simple mathematical square method here.

screenshot-from-2016-09-30-12-51-33

Now create another file named ‘result.py’ in the same directory and type the following code and save it.

screenshot-from-2016-09-30-12-57-10

Now on terminal run the program with ‘python3 result.py’
fotoflexer_photo

Here we have defined a method in a module and are using it in another file.

Now let’s look into if __name__ == ‘__main__’:

Open the ‘mymath.py’ file and edit it as given in following:

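(The edited file's screenshot is also unavailable; a hypothetical reconstruction that matches the behaviour described below:)

```python
# mymath.py (edited) -- hypothetical reconstruction of the screenshot
def square(num):
    """Return the square of num."""
    return num * num

# When mymath.py is run directly, __name__ is "__main__";
# when it is imported from result.py, __name__ is "mymath".
if __name__ == '__main__':
    print(__name__)
    print(square(5))
```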

Leave 'result.py' unchanged.

Now, on your terminal, run 'python3 result.py'.


Here we have imported the module mymath. The variable __name__ is set to the name of the imported module, so here it is the string 'mymath'.

Now, on the terminal, run 'python3 mymath.py'.


We have run the file mymath.py as a program itself, and you can see that the variable __name__ is now set to the string "__main__".
So the check if __name__ == "__main__": means: execute the following instructions only if the file is run as a standalone program, not when it is imported as a module.

If you do print(type(__name__)) in the program, you will see that __name__ is of 'str' (string) type.

Happy Coding!


by Trishna Guha at September 30, 2016 08:59 AM

August 24, 2016

Sayan Chowdhury

Autocloud: What's new?

Autocloud was released during the Fedora 23 cycle as a part of the Two Week Atomic Process.

Previously, it used to listen to fedmsg for successful Koji builds. Whenever there was a new message, the AutocloudConsumer queued the message for processing. The Autocloud job service then listened to the queue, downloaded the images, and ran the tests using Tunir. A more detailed post about its release can be read here.

During the Fedora 24 cycle things changed. There was a change on how the Fedora composes are built. Thanks to adamw for writing a detailed blogpost on what, why and how things changed.

With this change, Autocloud now listens for the compose builds over fedmsg, on the topic "org.fedoraproject.prod.pungi.compose.status.change". It checks for messages with the status FINISHED or FINISHED_INCOMPLETE.
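As a sketch, the filtering step looks roughly like this (the function and field names are illustrative, not Autocloud's actual code):

```python
# Illustrative sketch of Autocloud's compose-message filtering;
# the real consumer code may differ.
COMPOSE_TOPIC = "org.fedoraproject.prod.pungi.compose.status.change"
WANTED_STATUSES = {"FINISHED", "FINISHED_INCOMPLETE"}

def should_process(topic, message):
    """Return True for finished compose messages worth testing."""
    return topic == COMPOSE_TOPIC and message.get("status") in WANTED_STATUSES
```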

After this filtering, it gets the Cloud images built during that particular compose using a tool called fedfind, whose job here is to parse the compose metadata and pick out the Cloud images. These images are then queued for both the libvirt and vbox backends. The Autocloud job service then downloads the images and runs the tests using Tunir.

Changes in the Autocloud fedmsg messages

Earlier, Autocloud only sent messages about the status of individual image tests (the autocloud.image.* topics).

Now, along with those image-test status messages, Autocloud also sends messages for the status of a particular compose.

The compose_id field was added to the autocloud.image.* messages

Changes in the UI

  • A page was added to list all the composes. It gives an overview of each compose: whether it is still running, the number of tests passed, etc.
  • The jobs page lists all the test data as earlier. We added filtering to the page, so you can filter the jobs based on various parameters.
  • The jobs output page looks better than before. Rather than showing a big dump of text, the output is now properly formatted, and you can reference each line separately.

Right now, we are planning to work on testing the images uploaded via fedimg in Autocloud. Does the project look interesting, and are you planning to contribute? Ping us on #fedora-apps on Freenode.

August 24, 2016 11:58 AM

July 30, 2016

Suraj Deshmukh

Kubernetes: HorizontalPodAutoScaler and Job

To try out the following demos, set up your environment as mentioned here.

Git clone the demos repo:

git clone https://github.com/surajssd/k8s_demos/
cd k8s_demos/

Horizontal Pod Autoscaler

Once you have everything set up, follow the video for the demo instructions.
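For quick reference (the actual manifests used in the video and the repo may differ; all names here are illustrative), a minimal HorizontalPodAutoscaler looks like this:

```yaml
# Scale demo-deployment between 1 and 5 replicas, targeting 80% CPU
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: demo-deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```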

Job

Once you have everything set up, follow the video for the demo instructions.
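Similarly, for reference only (not necessarily the manifest from the video), a minimal Job resembles the classic pi-computing example from the Kubernetes docs:

```yaml
# A Job runs pods to completion instead of keeping them alive
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  completions: 1
  template:
    metadata:
      name: demo-job
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
```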


by surajssd009005 at July 30, 2016 11:53 AM