Planet dgplug

February 20, 2017

Trishna Guha

Automate Building your Own Atomic Host

Project Atomic hosts are built from standard RPM packages which have been composed into filesystem trees using rpm-ostree. This post describes how to automate building an Atomic host (creating new trees).

Requirements

Process

Clone the Build-Atomic-Host Git repository on your working machine.

$ git clone https://github.com/trishnaguha/build-atomic-host.git
$ cd build-atomic-host

Create VM from the QCOW2 Image

The following creates a VM from the QCOW2 image, with username atomic-user and password atomic. Here atomic-node is the instance name.

$ sudo sh create-vm.sh atomic-node /path/to/fedora-atomic25.qcow2
# For example: /var/lib/libvirt/images/Fedora-Atomic-25-20170131.0.x86_64.qcow2

Start HTTP Server

The tree is made available via a web server. The following playbook creates the directory structure, initializes the OSTree repository, and starts the HTTP server.

$ ansible-playbook httpserver.yml --ask-sudo-pass
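Roughly, the playbook's tasks amount to the following commands (the paths, port, and repo mode here are illustrative assumptions, not taken from the repo):

```shell
# Create a directory for the tree, initialize an OSTree repository
# in archive mode, and serve it over HTTP (paths/port are assumptions)
sudo mkdir -p /srv/atomic
sudo ostree --repo=/srv/atomic/repo init --mode=archive-z2
cd /srv/atomic && sudo python -m SimpleHTTPServer 80
```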

Use ip addr to check the IP address of the HTTP server.

Give OSTree a name and add HTTP Server IP Address

Set the variables in vars/atomic.yml to your OSTree name and the HTTP server's IP address.

For instance:

# Variables for Atomic host
atomicname: my-atomic
httpserver: 192.168.122.1

Here my-atomic is the OSTree name and 192.168.122.1 is the HTTP server's IP address.

Run Main Playbook

The following playbook installs the requirements, starts the HTTP server, composes the OSTree, performs SSH setup, and rebases the host onto the created tree.

$ ansible-playbook main.yml --ask-sudo-pass
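The rebase at the end amounts to roughly this on the Atomic host (the /repo path in the remote URL is an assumption about the server layout; the ref matches the vars example above):

```shell
# Add the HTTP server as an OSTree remote, then rebase onto the new tree
sudo ostree remote add --no-gpg-verify my-atomic http://192.168.122.1/repo
sudo rpm-ostree rebase my-atomic:fedora-atomic/25/x86_64/docker-host
```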

Check IP Address of the Atomic instance

The following command returns the IP address of the running Atomic instance:

$ sudo virsh domifaddr atomic-node

Reboot

Now SSH to the Atomic host and reboot it so that it boots into the created OSTree:

$ ssh atomic-user@<atomic-hostIP>
$ sudo systemctl reboot

Verify: SSH to the Atomic Host

Wait for about 10 minutes; you may want to go for a coffee now.

$ ssh atomic-user@192.168.122.221
[atomic-user@atomic-node ~]$ sudo rpm-ostree status
State: idle
Deployments:
● my-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.1 (2017-02-07 05:34:46)
        Commit: 15b70198b8ec7fd54271f9672578544ff03d1f61df8d7f0fa262ff7519438eb6
        OSName: fedora-atomic

  fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.51 (2017-01-30 20:09:59)
        Commit: f294635a1dc62d9ae52151a5fa897085cac8eaa601c52e9a4bc376e9ecee11dd
        OSName: fedora-atomic

Now you have the updated tree.

Shout-Out for the following folks:

A future post will cover customizing packages (adding and removing them) for the OSTree.


by Trishna Guha at February 20, 2017 08:10 AM

February 11, 2017

Kushal Das

Running OpenShift using Minishift

You may have already heard about Kubernetes, or you may be using it right now. OpenShift Origin is a distribution of Kubernetes optimized for continuous development and multi-tenant deployment. It also powers Red Hat OpenShift.

Minishift is an upcoming tool which enables you to run OpenShift locally on your computer, as a single-node OpenShift cluster inside a VM. I am using it on a Fedora 25 laptop, with the help of KVM. It can also be used on Windows or OS X. For KVM, I first had to install docker-machine-driver-kvm. Then I downloaded the latest minishift from the releases page. Unzip it, and put the binary in your path.
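The install steps look roughly like this (the archive name is illustrative; use whatever the releases page currently offers):

```shell
# Unpack the downloaded release and put the binary on $PATH
unzip minishift-linux-amd64.zip
chmod +x minishift
mv minishift ~/bin/   # or any other directory on your $PATH
```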

$ ./minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
E0209 20:42:29.927281    4638 start.go:135] Error starting the VM: Error creating the VM. Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.42.243:2376": tls: DialWithDialer timed out
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
. Retrying.
Provisioning OpenShift via '/home/kdas/.minishift/cache/oc/v1.4.1/oc [cluster up --use-existing-config --host-config-dir /var/lib/minishift/openshift.local.config --host-data-dir /var/lib/minishift/hostdata]'
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v1.4.1 image ... 
   Pulling image openshift/origin:v1.4.1
   Pulled 0/3 layers, 3% complete
   Pulled 0/3 layers, 24% complete
   Pulled 0/3 layers, 45% complete
   Pulled 1/3 layers, 63% complete
   Pulled 2/3 layers, 81% complete
   Pulled 2/3 layers, 92% complete
   Pulled 3/3 layers, 100% complete
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ... 
   Using Docker shared volumes for OpenShift volumes
-- Creating host directories ... OK
-- Finding server IP ... 
   Using 192.168.42.243 as the server IP
-- Starting OpenShift container ... 
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Adding default OAuthClient redirect URIs ... OK
-- Installing registry ... OK
-- Installing router ... OK
-- Importing image streams ... OK
-- Importing templates ... OK
-- Login to server ... OK
-- Creating initial project "myproject" ... OK
-- Removing temporary directory ... OK
-- Server Information ... 
   OpenShift server started.
   The server is accessible via web console at:
       https://192.168.42.243:8443

   You are logged in as:
       User:     developer
       Password: developer

   To login as administrator:
       oc login -u system:admin

The oc binary is in the ~/.minishift/cache/oc/v1.4.1/ directory, so you can add that to your PATH. If you open up the above-mentioned URL in your browser, you will find your OpenShift cluster up and running.
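For example, adding the cached oc client to PATH for the current shell:

```shell
# Put minishift's cached oc client on PATH for this session
export PATH="$HOME/.minishift/cache/oc/v1.4.1:$PATH"
```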

Now you can start reading Using Minishift to start using your brand new OpenShift cluster.

by Kushal Das at February 11, 2017 06:46 AM

Farhaan Bukhsh

Hacking on Pagure CI

“Ahaa!” I got a lot of ahaa moments when I was hacking on Pagure CI. Pagure CI's initial draft was laid out by lsedlar, and I have blogged about it; it was then worked on by me and Pingou. Pingou has done really amazing work with the flow and with refactoring the code to make beautiful API calls.

I had a great time hacking on it and learned a lot. A few of the lessons are:

  1. Do minimal work when setting up the development environment; mock everything that is available for testing.
  2. Think deeply when your mentor points something out to you.

The issue I was working on was a long-pending one: attaching a build ID to every Jenkins build Pagure gets. Attaching build IDs is necessary to distinguish between different builds and to make the link to Jenkins a bit more specific, for example, to tell which build failed.

The first mistake I made was setting up Jenkins on my machine. I had it previously, but my machine went into a kernel panic and I lost all data related to Jenkins, and Fedora 25 currently has a packaging issue when installing Jenkins directly. Anyhow, I found a way to set it up from the Jenkins site, and it worked for me. In the meanwhile, Pingou was pointing out that I didn't actually need a Jenkins instance, but I was not able to follow him on that, and I really feel bad about it.

After setting up Jenkins, the next task was to configure it, which was really easy because I had done it before and because it is well documented. The documentation is fine for setting things up, but for hacking on the CI you need a little less work.

Step 1

Set up Redis on your machine: install it with sudo dnf install redis, enable the service with sudo systemctl enable redis, and start it with sudo systemctl start redis. Along with this, you need to add the Redis configuration to default_config.py, or whichever config file you pass to the server with --config. The configuration options are well documented in pagure.cfg.sample.
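Put together, step 1 looks like this (the option names follow pagure.cfg.sample; the values shown are illustrative):

```shell
# Install, enable, and start Redis
sudo dnf install redis
sudo systemctl enable redis
sudo systemctl start redis

# Append the Redis settings to the config file you pass via --config
cat >> default_config.py <<'EOF'
REDIS_HOST = "127.0.0.1"
REDIS_PORT = 6379
REDIS_DB = 0
EOF
```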

Step 2

Now, copy pagure-ci-server from the pagure-ci directory into the parent directory. This step is necessary because this is the service that runs Pagure CI. Then run it with python pagure-ci-server.py. Once started, the service will be up and running.
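In other words:

```shell
# Copy the CI service out of the pagure-ci directory and start it
cp pagure-ci/pagure-ci-server.py .
python pagure-ci-server.py
```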

Step 3

Now fire up your instance and create a project with two branches, then open a PR from one branch to the other. If you get an authentication error, that is most probably because you have not set the right permissions for users in Jenkins; it is not recommended, but you can turn off Jenkins security entirely while you are just testing something.

If you have done everything correct you will see the Jenkins flag being attached to the Pull Request.

VERY IMPORTANT NOTE:

All this could have been avoided if I had just used python-jenkins to fetch a job from the Fedora Jenkins instance and sent it as a flag to my PR. Thank you, Pingou, for telling me this hack.
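For the record, python-jenkins is a thin wrapper over Jenkins' JSON API, so the same build information can even be fetched with a single HTTP call; in this sketch the server URL and job name are made-up placeholders:

```shell
# Ask a Jenkins instance for the result of a job's last build
# (server URL and job name are placeholders)
curl -s "https://jenkins.example.org/job/pagure-ci/lastBuild/api/json" \
    | python -c 'import json, sys; print(json.load(sys.stdin)["result"])'
```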

Happy Hacking!


by fardroid23 at February 11, 2017 04:35 AM

February 10, 2017

Kushal Das

Running gotun inside Jenkins

By design, gotun is a command line tool which can be called from other scripts or any larger system. In the world of CI, Jenkins is the biggest name, so one of the goals was also to be able to execute it within Jenkins for tests.

Setting up a Jenkins instance for test

If you don’t have a Jenkins setup already, you can just create a new one for staging using the official container. For my example setup, I am using the same at http://status.kushaldas.in

Setting up the first job

My only concern was how to set up the secrets for authentication information on Jenkins (remember, I am a Jenkins newbie). This blog post helped me get it done. In the first job, I am creating the configuration (in case we add something dynamic like the image name there in future). The secrets come from the ENV variables as described in the gotun docs. In the job, I am running the Fedora Atomic tests on the image. Here is one example console output.

Running the upstream Atomic host tests in gotun inside Jenkins

My next task was to run the upstream Project Atomic host tests using a similar setup. All the configuration files for the tests are available in this git repo. As explained in a previous post, onevm.py creates the inventory file for Ansible, and then runsetup.sh executes the playbook. You can view the job output here.

For both the jobs, I am executing a Python script to create the job yaml files.

by Kushal Das at February 10, 2017 03:57 PM

February 08, 2017

Anwesha Das

A date with Microbit @ February meetup of PyLadies Pune

PyLadies Pune is 8 meetups old now (after the rebooting). I missed last month's meetup, as Py was unwell. This month's meetup was special. It was the first time the PyLadies meetup took place at reserved-bit, the coolest place in Pune. Thank you, reserved-bit, for hosting us. The next thing was the fact that it was a session about the microbit. A super huge thanks to ntoll for sending us the hardware.
Microbits are among our (mine and Kushal's) dearest possessions. I am learning coding on it and I really wanted to share the experience, the fun, with my fellow PyLadies.
I left home early, leaving the instructor for the session alone with his daughter (and I was so happy about it). We are just two weeks away from PyCon Pune, and hence decided our first agenda for the meetup would be to discuss our plan for PyCon. The girls came up with really nice ideas about the booth. We have also decided what we PyLadies wish to do during the devsprint.

Mummy v/s Daddy

The daddy-daughter duo arrived. Both of them looked like they had just returned from the war front. Mommy to the rescue! Py was really excited to see her “bestest friend” Ira over there. With each passing meetup, our Pybabies are becoming increasingly comfortable with their mummies working. Barring a few exceptions, Py was at her best behaviour this time.

The Fun begins

Kushal started the session after his quick coffee break. He introduced the group to the microbit, the tiny computer (a wave of "wow" blew over the room :). He asked us to download the Mu editor from his local cache. We had decided each participant would get a chance to work with a microbit. And then came the twist in the tale: Kushal forgot to bring the microbits. (This happens when the wife leaves home early.) After throwing him an angry glance, we requested Sayan to get them. Sayan readily agreed. Peace prevailed! Meanwhile, Kushal gave us a problem: to find out the groups of the current user on a Linux system. Windows users were to share a group file.
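For reference, one quick answer to that problem from a Linux shell:

```shell
# Print the names of all groups the current user belongs to
id -Gn
```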

The arrival of microbits

photo courtesy: Sayan Chowdhury

Finally the microbits arrived, and each participant got one. We opened up the microbit documentation and got started with scrolling images and the typical "hello PyLadies" on the display board. Post lunch, we tried playing with music, speech and other features. We plugged our earphones into the microbits with the alligator-clip cables. The last and the best part of the workshop was working with the radio module. We were sending various messages, and it was such fun to see them on each other's devices. Nisha and Siddhesh went out to actually check the range. It covers a large area.

photo courtesy: Kushal Das

Well after the session was formally over, people actually stayed back and continued working, trying things on their own (maybe the little magic of being inside a hackerspace). We have decided to work on it during the PyCon Pune devsprints. As Nisha said, "It is the best PyLadies meetup we have ever had".

by Anwesha Das at February 08, 2017 08:39 AM

February 07, 2017

Shakthi Kannan

GNU Emacs - HTML mode, indentation and Magit

[Published in Open Source For You (OSFY) magazine, April 2016 edition.]

This article in the GNU Emacs series shows readers how to use HTML mode, do indentation, and use the Magit interface.

HTML mode

You can use HTML mode to effectively edit HTML and CSS files using GNU Emacs. To start the mode, use M-x html-mode. You will see the string ‘HTML’ in the mode line.

Default template

A default HTML template can be started by opening a test.html file, and using C-c C-t html. It will produce the following content:

<html>
  <head>
<title>

You will then be prompted with the string ‘Title:’ to input the title of the HTML page. After you type ‘Hello World’, the default template is written to the buffer, as follows:

<html>
  <head>
<title>Hello World</title>
</head>
<body>
<h1>Hello World</h1>

<address>
<a href="mailto:user@hostname">shakthi</a>
</address>
</body>
</html>

Tags

You can enter HTML tags using C-c C-t. GNU Emacs will prompt you with the available list of tags. A screenshot of the available tags is shown in Figure 1:

HTML tags

The anchor tag can be inserted using ‘a’. You will then receive a message prompt: ‘Attribute:’. You can provide the value as ‘href’. It will then prompt you for a value, and you can enter a URL, say, ‘http://www.shakthimaan.com’. The anchor tag will be constructed in the buffer as you input values in the mini-buffer. You will be prompted for more attributes. If you want to finish, simply hit the Enter key, and the anchor tag will be completed. The final output is shown below:

<a href="http://www.shakthimaan.com"></a>

You can insert an h2 tag by specifying the same after C-c C-t. You can also add any attributes, as required. Otherwise, simply hitting the Enter key will complete the tag. The rendered text is as follows:

<h2></h2>

You can insert images using the img tag. You can specify the src attribute and a value for the same. It is also a good practice to specify the alt attribute for the image tag. An example is shown below:

<img alt="image" src="http://shakthimaan.com/images/ShakthiK-workshop-A4-poster.png">

Unordered lists can be created using C-c C-t followed by ‘ul’. It will then prompt you for any attributes that you want included in the tag. You can hit the Enter key, which will prompt you with the string ‘List item:’ to key in list values. An example of the output is shown below:

<ul>
  <li>One
    <li>Two
      <li>Three
</ul>

You can neatly align the code by highlighting the above text and indenting the region using C-M-\. The resultant output is shown below:

<ul>
  <li>One
  <li>Two
  <li>Three
</ul>

If you wish to comment out text, you can select the region and type M-;. The text is enclosed using “<!--” and “-->”. For example, the commented address tags in the above example look like what follows:

<!-- <address> -->
<!-- <a href="mailto:shakthi@achilles">shakthi</a> -->
<!-- </address> -->

A number of major modes exist for different programming environments. You are encouraged to try them out and customize them to your needs.

Accents

In HTML mode, you can insert special characters, accents, symbols and punctuation marks. These characters are mapped to Emacs shortcuts. Some of them are listed in the following table:

Shortcut        Character
C-x 8 ' a       á
C-x 8 " e       ë
C-x 8 / E       Æ
C-x 8 3 / 4     ¾
C-x 8 C         ©
C-x 8 L         £
C-x 8 P         ¶
C-x 8 u         µ
C-x 8 R         ®
C-x 8 / /       ÷

Indentation

Consider the following paragraph:

“When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.”

You can neatly fit the above text into 80 columns and 25 rows inside GNU Emacs using M-q. The result is shown below:

When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.

You can also neatly indent regions using the C-M-\ shortcut. For example, look at the following HTML snippet:

<table>
<tr>
<td>Tamil Nadu</td>
<td>Chennai</td> 
</tr>
<tr>
<td>Karnataka</td>
<td>Bengaluru</td> 
</tr>
<tr>
<td>Punjab</td>
<td>Chandigarh</td> 
</tr>
</table>

After indenting the region with C-M-\, the resultant output is shown below:

<table>
  <tr>
    <td>Tamil Nadu</td>
    <td>Chennai</td> 
  </tr>
  <tr>
    <td>Karnataka</td>
    <td>Bengaluru</td> 
  </tr>
  <tr>
    <td>Punjab</td>
    <td>Chandigarh</td> 
  </tr>
</table>

If you have a long line which you would like to split, you can use the C-M-o shortcut. Consider the quote:

“When you’re running a startup, your competitors decide how hard you work.” ~ Paul Graham

If you keep the cursor after the comma, and use C-M-o, the result is shown below:

"When you're running a startup, 
                                your competitors decide how hard you work." ~ Paul Graham

Magit

Magit is a fantastic interface to Git inside GNU Emacs. There are many ways in which you can install Magit. To install from the Melpa repository, add the following to your ~/.emacs:

(require 'package)
(add-to-list 'package-archives
             '("melpa" . "http://melpa.org/packages/") t)

When you do M-x list-packages, you will see ‘magit’ in the list. You can press ‘i’ to mark Magit for installation, followed by ‘x’ to actually install it. This will install Magit in ~/.emacs.d/elpa. The version installed on my system is magit-20160303.502.

When you open any file inside GNU Emacs that is version controlled using Git, you can start the Magit interface using M-x magit-status. I have bound this key to C-x g shortcut in ~/.emacs using the following:

(global-set-key (kbd "C-x g") 'magit-status)

The default magit screenshot for the GNU Emacs project README file is shown in Figure 2.

Magit

Pressing ‘l’ followed by ‘l’ will produce the history log in the magit buffer. A screenshot is provided in Figure 3.

History

You can make changes to the project sources and stage them to the index using the ’s’ shortcut. You can unstage the changes using the ‘u’ shortcut. After making changes to a file, you need to use M-x magit-status to update the Magit buffer status.

A sample screenshot of the modified files and staged changes is shown in Figure 4.

Staged

You can hit TAB and Shift-TAB to cycle through the different sections in the Magit buffer. To commit a message, press ‘c’ followed by ‘c’. It will pop up a buffer where you can enter the commit message.

You can create and checkout branches using the ‘b’ shortcut. A screenshot of the magit branch pop-up menu is shown in Figure 5.

Branch

All the basic Git commands are supported in Magit: diffing, tagging, resetting, stashing, push-pull, merging and rebasing. You are encouraged to read the Magit manual ( https://magit.vc/ ) to learn more.

February 07, 2017 07:30 PM

February 06, 2017

Sayan Chowdhury

PyLadies Pune Meetup - February 2017

The PyLadies Pune February Meetup was held on 6th Feb at reserved-bit. Kushal took a session on MicroPython on the MicroBit boards. Thanks to @ntoll for sending over the MicroBits for workshops.

I reached the venue a bit late. On arriving, Kushal told me that he had forgotten the boards at home, so I went to his place to pick up the MicroBits.

Most of the participants were regulars at the meetup, so Kushal took a quick refresher course on Python. After lunch, they started referring to the MicroBit MicroPython documentation and played around with the device, displaying their names and using the buttons to simulate actions.

Towards the end of the workshop, Kushal demoed the radio module, and that turned out to be the most interesting part. Kushal sent the first radio message, “Hello PyLadies”, and all the other peeps were able to see the message on their MicroBits. After that, everyone started sending out messages to each other. Nisha ran out of the hackerspace to check the range of the radio.

The meetup ended after a small discussion about PyCon Pune.

February 06, 2017 03:00 PM

Trishna Guha

PyCon India 2016

Heya! First of all I’m really sorry for such a delay with PyCon India 2016 blog post.

It was my first PyCon India. I have always wanted to attend such a nice conference, but I was probably going to miss it because of funds. It was only because of DGPLUG that I could attend PyCon India 2016: they took care of my travel from Pune to Delhi and my accommodation. Have a look: https://kushaldas.in/posts/dgplug-contributor-grant-recipient-trishna-guha.html 🙂

Day 1 started with workshops and the open space. I stayed in the open space since I didn't buy a ticket for the workshops. I came to know about a useful project, Ansible-Container, from Shubham Miglani, and we started hacking on the project. I also created an issue for the project, but couldn't work on the patch since the fix was already done and out with the next release.

I met many of the faces whom I used to know only on IRC/Twitter. It was really exciting, and my first day of the conference was over.

The main conference started on Day 2, with a keynote by Baishampayan Ghose. He gave a nice keynote on building bridges, distributed architecture and functional testing. Then the other talks carried on. I was at the Red Hat and PyLadies booths most of the time.

We also had a keynote by VanL about software design and failure, which was great.

Many people came by with interest in internships at Red Hat and in joining the PyLadies community. We had a DGPLUG + PyCon India dinner at BBQ, Delhi, later that night.

Day 3 started with a keynote by Andreas Muller on machine learning. Then talks carried on in multiple tracks. I really enjoyed the microservices talk by Ratnadeep Debnath. We had the DGPLUG staircase meeting. Thereafter we had an open discussion on PyLadies, diversity and the FOSS community with Paul Everitt, Dmitry Filippov and VanL.

There was a Red Hat sponsored talk by Kushal Das.

Oh yes, I gave a short lightning talk on Project Atomic and on Bodhi, a Fedora Infrastructure application, as well. The day ended with the DGPLUG photo shoot.

We had dinner outside and headed back to Pune that night.

Below are a few photos I have :-).


For more photos visit: https://www.flickr.com/photos/sayanchowdhury/albums/72157674406421245

We are going to have another Python conference really soon: PyCon Pune 2017 :-).


by Trishna Guha at February 06, 2017 01:12 PM

February 04, 2017

Sayan Chowdhury

Redesigning fedimg (Part 2): Communication with AWS

In the previous post, I discussed what fedimg is and how it currently works. In this post, I explain the issue in the current AMI uploading process and how we plan to fix it.

What’s the problem?

fedimg boots a utility instance using the RHEL AMI configured for that region.

The problem with this design is that when adding a new region we first need to look for the appropriate RHEL AMI in that region. Afterward, we need to test whether fedimg works with the newer region's RHEL AMIs. Finally, if everything turns out okay, we can go ahead and add the AMI to the fedimg configuration.

This becomes a tedious task over time and delays adding a new region.

How does the redesign fix this?

The new design completely removes the dependency on utility instances. fedimg would instead use euca-import-volume to upload the raw cloud image and create a volume via S3.

fedimg keeps polling with euca-describe-conversion-tasks to check whether the volume is ready. Once the volume is created, fedimg goes ahead with creating a snapshot of the volume and then, finally, the AMI.
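Sketched with the euca2ools commands (the image file, zone, and bucket names are placeholders, and the option spellings should be double-checked against your euca2ools version):

```shell
# Upload a raw image through S3 as an import-volume task, then poll
# the conversion task until the volume is ready (names are placeholders)
euca-import-volume fedora-atomic-25.raw \
    --format raw --zone us-east-1a --bucket fedimg-volumes
euca-describe-conversion-tasks
```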

This change removes a lot of redundant code and results in a simpler fedimg configuration file [1][2]. I have been working on this for some time now.

Adding a new region now becomes effortless as we just need to append the region to the configuration file.

Right now, fedimg also boots test instances to test the AMIs and runs a basic /bin/true test. This test only ensures that the machine boots, nothing more.

In the next post, I will be writing on how I will be going ahead to build a testing infrastructure for the AMIs using Autocloud and ResultsDB.

February 04, 2017 06:30 AM

February 02, 2017

Shakthi Kannan

Parabola GNU/Linux Libre on Lenovo E460

I wanted to buy a laptop for my personal use, exclusively for Free Software. I have mostly used Lenovo Thinkpads because of their good support for GNU/Linux. I neither wanted an expensive one nor did I want to pay for Windows. An SSD would help speed up the boot time, and a wide-screen laptop would be helpful for splitting the Emacs frame. I looked at various netbooks on the market, but their processing power was not good enough. I really need computing power, with at least four CPUs. 8 GB of RAM with room for expansion will be helpful in future. I prefer anti-glare matte displays as they are easy on the eyes. Also, I wanted a laptop that weighs less than 2 kg and is easy to carry around.

After reviewing many models and configurations, I purchased the Lenovo E460 model that comes with FreeDOS, and swapped the default HDD for SSD (< 500 GB).

Lenovo E460 screen

Specification

  • Intel(R) Core(TM) i5-6200U CPU @ 2.30 GHz (4 processors).
  • 14" display
  • 437 GB SSD disk
  • 8 GB RAM
  • Intel Corporation HD Graphics 520
  • Intel Dual Band Wireless-AC 3165 Plus Bluetooth
  • Intel I219-V Ethernet Controller
  • 3 USB ports
  • 1 HDMI port
  • 4-in-1 Card Reader
  • FreeDOS
  • 1.81 kg

Parabola GNU/Linux Libre

I tried Trisquel GNU/Linux first on this laptop. It is a derivative of Ubuntu without non-free software. I experimented with Qubes OS, but, its dom0 has proprietary blobs. GNU Guix is an interesting project, but, it does not have all the packages that I need (yet). I like rolling distributions, and hence decided to try Parabola GNU/Linux Libre, a derivative of Arch, without the binary blobs.

There is no CD/DVD drive on this laptop, but, you can boot from USB. I first checked that all the software I need is available in the Parabola GNU/Linux Libre repository, and then proceeded to install it. I always encrypt the disk during installation. I have the Mate desktop environment with XMonad set up as a tiling window manager.

Lenovo E460 screen

Audio works out of the box. I do not use the web cam. I had to use the package scripts to install Grisbi as it was not available in the base repository. Virtualization support exists on this hardware, and hence I use Virtual Machine Manager, QEMU and libvirt.

Command Output

All the hardware worked out of the box, except for the wireless which requires a binary blob. So, I purchased a ThinkPenguin Wireless N USB Adapter for GNU/Linux which uses the free ath9k Atheros wireless driver.

As is customary, I am providing some command outputs.

$ lspci

00:00.0 Host bridge: Intel Corporation Skylake Host Bridge/DRAM Registers (rev 08)
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 520 (rev 07)
00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-LP Thermal subsystem (rev 21)
00:16.0 Communication controller: Intel Corporation Sunrise Point-LP CSME HECI #1 (rev 21)
00:17.0 SATA controller: Intel Corporation Sunrise Point-LP SATA Controller [AHCI mode] (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 9d12 (rev f1)
00:1c.5 PCI bridge: Intel Corporation Sunrise Point-LP PCI Express Root Port #6 (rev f1)
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-LP LPC Controller (rev 21)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-LP PMC (rev 21)
00:1f.3 Audio device: Intel Corporation Sunrise Point-LP HD Audio (rev 21)
00:1f.4 SMBus: Intel Corporation Sunrise Point-LP SMBus (rev 21)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection I219-V (rev 21)
01:00.0 Network controller: Intel Corporation Intel Dual Band Wireless-AC 3165 Plus Bluetooth (rev 99)
02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS522A PCI Express Card Reader (rev 01)

$ uname -a

Linux aether 4.8.17-gnu-1 #1 SMP PREEMPT Wed Jan 18 05:04:13 UYT 2017 x86_64 GNU/Linux

$ df -h

Filesystem             Size  Used Avail Use% Mounted on
dev                    3.7G     0  3.7G   0% /dev
run                    3.7G  920K  3.7G   1% /run
/dev/mapper/cryptroot  437G   95G  321G  23% /
tmpfs                  3.7G   26M  3.7G   1% /dev/shm
tmpfs                  3.7G     0  3.7G   0% /sys/fs/cgroup
tmpfs                  3.7G  196K  3.7G   1% /tmp
/dev/sda1              976M   45M  865M   5% /boot
tmpfs                  745M   28K  745M   1% /run/user/1000

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.3G        2.4G        3.2G         83M        1.7G        4.6G
Swap:          2.1G          0B        2.1G

$ lscpu

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    2
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 78
Model name:            Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
Stepping:              3
CPU MHz:               499.951
CPU max MHz:           2800.0000
CPU min MHz:           400.0000
BogoMIPS:              4801.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              3072K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp

Conclusion

Lenovo E460 screen

I have been using the laptop for more than three months, and it has been a really smooth experience. It cost less than ₹55,000. The battery life is decent. I printed a couple of Free Software stickers to identify my laptop. The “Inside GNU/Linux” sticker covers the webcam, and the “Free Software Foundation” sticker is pasted behind the screen. The folks at the #parabola IRC channel on irc.freenode.net are quite helpful, and the Parabola GNU/Linux Libre Wiki has excellent documentation for your reference.

February 02, 2017 04:15 PM

January 22, 2017

Jaysinh Shukla

Visit to Indian Linux User Group, Chennai

Indian Linux User Group Chennai

I was recently travelling to Chennai for some personal work, and I was very keen on meeting Mr. Shakthi Kannan. While travelling to Chennai I dropped him a mail inquiring about his availability. He replied with an invitation to attend the Meetup of the Indian Linux Users Group, Chennai, scheduled at IIT Madras. I happily accepted the invitation and decided to attend the Meetup.

The Meetup was happening in the Aerospace Engineering department, IIT Madras. The campus is so huge that it took a 15-minute bus journey to get to the department. Because I was late, I missed the initial talk on Emacs Org mode by Shakthi, but I was lucky enough to attend the lightning talk section. When I entered, a young boy was demonstrating BeEF. I was not aware of this tool before his talk; from it I learned that BeEF is a penetration-testing tool, and I am now confident enough to switch to its documentation and start going through it. His talk ended with a little discussion to clear doubts. With nearly one hour still remaining, Shakthi came forward and invited interested people to present a lightning talk. That sure rang a bell! I had not even been sure about attending the Meetup, and presenting something instantly looked tough. Mohan moved forward and demonstrated how memory mapping works in a GNU/Linux system. During his talk I quickly skimmed a list of topics I was familiar with and decided to speak on JSON Web Tokens. I introduced myself and demonstrated ways to generate a secure token, discussed the architecture a bit, and gave a few guidelines and points to remember. I ended my talk by comparing OAuth 2.0 with JWT. A few interested people asked questions, and I was lucky enough to resolve their confusion.
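For the curious, a secure-token demo along those lines can be sketched with nothing but the Python standard library. The following builds and verifies a minimal HS256-signed JWT; the secret and claims are made-up illustration values, not what was actually presented at the Meetup.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact JWT: base64url(header).base64url(payload).signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header).encode()) + b"." +
                     b64url(json.dumps(payload).encode()))
    signature = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + b64url(signature)).decode()

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the HMAC over header.payload and compare in constant time."""
    signing_input, _, signature = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected).decode(), signature)

token = make_jwt({"sub": "alice", "admin": False}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))   # True
print(verify_jwt(token, b"wrong-secret"))  # False
```

A real application would normally use a maintained library such as PyJWT instead, which also handles expiry claims and algorithm negotiation.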

I realized it is better to have a small, interested audience than a large, distracted one. This group has nicely experienced people in the GNU/Linux domain. If Chennai is your town, I would encourage you to join this group and get involved.

Proofreaders: Shakthi Kannan, Harsh Vardhan

January 22, 2017 12:00 AM

January 11, 2017

Jaysinh Shukla

Python Express

What is Python Express?

PythonExpress is a movement initiated by PSSI, the Python Software Society of India. It is an online tool that helps colleges schedule Python workshops on their campuses.

The movement runs under the banner of “Python month”, celebrated during the month before PyCon India. During this time, members of the community spare a little time and reach out to nearby colleges to conduct Python workshops. The purpose of the event is to encourage students to join PyCon India.

Early Activities

January 11, 2017 12:00 AM

December 20, 2016

Anwesha Das

Making their ways: PyLadies Pune

In July 2016, PyLadies Pune came out of hibernation and rebooted its journey. Let me tell you honestly, it has been quite a roller-coaster ride since then. We faced a lot of problems, but the biggest among them was the lack of participation by women. For the first month, the male-female ratio at the meetup was 60:40. I, as an organizer, thought women did not show up much because it was the first meetup. But the proportion of women decreased by almost 30% at the next meetup. I was worried. Where were we going wrong?
Was the content of the meetup not good enough? Or something else? We focused on this “something else”.

Publicity:

People actually didn't know about PyLadies Pune. We started addressing this problem by going to several colleges, IT companies, and conferences. This significantly increased the number of new members joining PyLadies Pune and attending events.

Inspire each other

One thing we were very sure about from the beginning was that it would inspire people to see others learning something new and getting better opportunities (better jobs, maybe); in short, getting better in some way after coming to this meetup. Yes, this aspect is working, with new participants in turn inspiring others to come along.

The big one: Social media

Honestly, I am having a hard time getting attendees to appreciate just how important social media is these days. At every meetup I allot at least 5 minutes to asking them, “Please blog, tweet; it's important”, and explaining why that is so. In our very 2nd meetup we had a session on communication skills.
But I realized that only asking attendees to blog would not work. There should be something that gives the blogs more visibility and reaches a bigger audience. When the number of readers increases and posts get good comments, their authors may feel that maintaining a blog is a good use of their time. So we created our own Planet PyLadies Pune.

Boost confidence

Boosting the confidence of the participants is a major goal for us, so we keep a lightning-talks session in our meetups. In this session, the girls talk about their work in Python. It gives them a chance to speak in public, and since they are talking in front of their friends, it becomes easier for them.

November meetup of PyLadies Pune

The meetup was scheduled after a long Diwali vacation, on the 13th of November, 2016. I reached early and made all the arrangements required for the meetup. Py was there with me. These days even she likes the PyLadies meetups, as they give her a chance to play all day while Mommy is busy coding.

It was 10:45 AM and I freaked out, as my speaker for the day, Sayan, had not yet shown up and his phone was not working. But to my great relief he arrived at 11 AM, bang on time.

Suddenly I noticed something that brought a big smile on my face. There were 12 women present at the meet up. More women than men. Along with our regular participants (more or less 8 women), there were a good number of new faces.

I settled down, Sayan was about to take his session.

If someone intends to contribute to open source or work in a collaborative manner, she must know git. I got to know the importance of git while submitting patches. I learned a few git commands, and it seemed quite difficult to me. And when I was stuck with an issue, my in-house help, husband dear, could not solve it and referred me to Sayan. He made it so clear and easy. Then I thought, “Why not ask him to lead a session on git for PyLadies?” And he readily agreed.

Sayan started his session by explaining “What is git?” and “Why is it important?”, and then moved on to terms like patch, merge, and PR (pull request). Then he slowly moved into more detail and basic git commands like:

git init
git config --global user.name "username"
git config --global user.email "userEmail"
git add <file>
git commit -m "Commit message"
git push origin master
git status
git remote add origin <repository-url>
git checkout -b <branch-name>
git fetch origin

It was a very nice and useful session for us. Despite a little disturbance caused by Py, I could attend the whole session.

After lunch, Nisha took her session on creating our own map using GeoJSON. She gave this talk at FUDCon this year, so we wanted her to present it for all those PyLadies who could not attend the conference. She talked about what spatial data is, and explained that we can convert data (which is generally in CSV format) and represent it as GeoJSON. She used Python to convert the file format to GeoJSON. The next step is to upload the converted file to GitHub, have a look at it, and use JavaScript for the actual map rendering. She also showed us a cool example of a CV built this way.
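The CSV-to-GeoJSON conversion she described can be sketched with the Python standard library alone. The column names `name`, `lat`, and `lon` below are assumptions about the input file for illustration, not Nisha's actual data.

```python
import csv
import io
import json

def csv_to_geojson(csv_text: str) -> dict:
    """Turn CSV rows with name/lat/lon columns into a GeoJSON FeatureCollection."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        features.append({
            "type": "Feature",
            "geometry": {
                "type": "Point",
                # GeoJSON orders coordinates as [longitude, latitude]
                "coordinates": [float(row["lon"]), float(row["lat"])],
            },
            "properties": {"name": row["name"]},
        })
    return {"type": "FeatureCollection", "features": features}

sample = "name,lat,lon\nPune,18.52,73.86\nChennai,13.08,80.27\n"
print(json.dumps(csv_to_geojson(sample), indent=2))
```

The resulting `.geojson` file can then be pushed to GitHub, which renders such files as interactive maps.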

December PyLadies meetup

Our December meetup witnessed a trend similar to the November meetup: more women participants than men. For this meetup, we planned to have a Python 101 session. Trishna was supposed to take the session, but she could not join as she had to stand in a bank queue to get some cash (the demonetization effect). So Kushal took the session instead. It was a very useful session, especially for the new people in our group, and we could revise our Python skills.

Then it was time for a photograph. The photograph of a group, where women are the majority, and yes we do code in Python.

by Anwesha Das at December 20, 2016 04:57 AM

December 07, 2016

Farhaan Bukhsh

Functional Programming 101

“Amazing!” That was my initial reaction when I heard and read about functional programming. I am very new to the whole concept, so I might go a little off while writing about it, and I am open to criticism. This is basically my understanding of functional programming and why I got hooked on it.

Functional programming is a concept, just like object-oriented programming. A lot of people confuse these concepts and start relating them to a particular language; the thing that needs to be clear is that languages are tools to implement concepts. In imperative programming you tell the machine, step by step, how to do things. For example:

  1. Assign x to y
  2. Open a file
  3. Read a file

Functional programming, on the other hand, is declarative: you describe what result you want rather than how to compute it. The nearest example I can come up with is a SQL query, where you say something like:

SELECT * FROM Something WHERE bang = something AND bing = something

Here we didn't tell the database how to fetch the rows; we only told it what we want. This is the gist I got of functional programming: we divide our task into various functional parts and describe what should happen to the data, rather than spelling out every step ourselves.
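The same contrast can be sketched in Python itself. This illustrative example filters even numbers twice: the first version spells out each step of the loop by hand, the second only states the condition and lets the language handle the iteration.

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: spell out every step of the iteration by hand.
evens_imperative = []
for n in numbers:
    if n % 2 == 0:
        evens_imperative.append(n)

# Declarative style: state the condition; the iteration is implicit.
evens_declarative = [n for n in numbers if n % 2 == 0]

print(evens_imperative)   # [2, 4, 6]
print(evens_declarative)  # [2, 4, 6]
```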

Some of the core concepts I came across were pure functions and functions treated as first-class citizens (first-class objects). Let's narrow down what each term means.

A pure function is a function whose return value is determined only by the input given. The best examples of pure functions are math functions: Math.sqrt(x) will return the same value for the same value of x, and x itself is never altered. Let's go on a tangent and see how this immutability of x is a good thing: it actually prevents data from getting corrupted. Okay! That is a lot to take in one go, so let's understand it with a simple example borrowed from the talk I attended.

We will take the example of a simple library system. Every library system has a book store, and here the book store is an immutable data structure. Now what happens if I want to add a new book to it? Since it is immutable I can't modify it, correct? A simple solution to this problem is that every time I add or remove a book, I actually deliver a new book store, and this new book store replaces the old one. That way I can preserve the old data, because hey, we are creating a whole new store. This is probably the gist of the pros of functional programming.

book_store = ["The Da Vinci Code", "Angels and Demons", "The Lost Symbol"]

def add_book(book_store, book):
    # Copy the old store instead of mutating it, then append the new book.
    new_book_store = list(book_store)
    new_book_store.append(book)
    return new_book_store

print(add_book(book_store, "Inferno"))
# ['The Da Vinci Code', 'Angels and Demons', 'The Lost Symbol', 'Inferno']

print(book_store)
# ['The Da Vinci Code', 'Angels and Demons', 'The Lost Symbol']

In the above code you can actually see that a new book store is returned on addition of a new book. This is what a pure function looks like.

Functions as first-class citizens: I can relate a lot to this because of Python, where we say that everything is a first-class object. Basically, when we say functions are first-class citizens, we are implying that a function can be assigned to a variable, passed as a parameter, and returned from another function. This is way more powerful than it sounds: it brings a lot of modular behavior to the software you are writing, and makes the project more organized and less tightly coupled, which is a good thing in case you want to make quick changes or even big feature-related changes.

def find_odd(num):
    return num if num % 2 != 0 else None

def find_even(num):
    return num if num % 2 == 0 else None

def filter_function(number_list, function_filter):
    # Keep only the numbers the passed-in filter accepts.
    return [num for num in number_list if function_filter(num) is not None]

number_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(filter_function(number_list, find_odd))   # [1, 3, 5, 7, 9]
print(filter_function(number_list, find_even))  # [2, 4, 6, 8]

In the above code you can see that a function is passed as an argument to another function.
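The remaining first-class property mentioned above, returning a function from another function, can be sketched like this (the `make_multiplier` name is my own illustrative example, not from the talk):

```python
def make_multiplier(factor):
    """Return a new function that multiplies its argument by factor."""
    def multiply(x):
        return x * factor
    return multiply

# Functions built and returned at runtime, then assigned to variables.
double = make_multiplier(2)
triple = make_multiplier(3)

print(double(5))  # 10
print(triple(5))  # 15
```

The inner function remembers `factor` even after `make_multiplier` returns; this closure behavior is what makes the pattern useful.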

I have not yet explored lambda calculus, which I am thinking of getting into. There is a lot more power and beauty in functional programming. I want to keep this post a quick read, so I might cover more examples later, but I really want to demonstrate this piece of code:

def fact(n, acc=1):
    # Tail-recursive factorial: acc accumulates the running product.
    return acc if n == 1 else fact(n - 1, n * acc)

Here acc=1 is the default accumulator. This is pure textbook, and really beautiful code that calculates the factorial of n. When it comes to FP, it is said: “To iterate is human, to recurse is divine.” I will leave you to think more about it, and will try to keep writing about things I learn.
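For contrast with the quote above, here is the same computation written iteratively; comparing the two makes the recursive version's accumulator easier to see.

```python
def fact_iterative(n):
    # The same factorial, with an explicit loop instead of recursion.
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

print(fact_iterative(5))  # 120
```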

Happy Hacking!


by fardroid23 at December 07, 2016 09:07 AM

November 02, 2016

Runa Bhattacharjee

Learning yet another new skill

About 3 weeks ago, when the autumn festival was in full swing and I was away from home in Bangalore, I made my way to a nearby maker space to spend a weekend learning something new. Besides the thought of spending a lonely weekend doing something new, I was egged on by a wellness initiative at my workplace that encouraged us to find some space away from work. I signed up for a 2-day beginner’s carpentry workshop.

 

workfloor
When I was little, I often saw my Daddy working on small pieces of wood with improvised carving tools to make little figurines or cigarette holders. The cigarette holders were lovely but they were given away many years ago, when he (thankfully) stopped smoking. Some of the little figurines are still around the house, and a few larger pieces made out of driftwood remain in the family home. However, I do not recall him making anything like a chair or a shelf that could be used around the house. In India, it is the norm to get such items made, but by the friendly neighborhood carpenter. Same goes for many other things like fixing leaking taps, or broken electrical switches, or painting a room. There is always someone with the requisite skills nearby who can be hired. As a result, many of us lack basic skills in these matters as opposed to people elsewhere in the world.

 

I did not expect to become an expert carpenter overnight, and hence went with the hope that my carpentry skills would improve from 0 to maybe 2, on a scale of 100. The class had 3 other people: a student, a man working at a startup, and a doctor. The instructor had been an employee at a major Indian technology services company, and now ran his own carpentry business and these classes. He had an assistant. The space was quite large (the entire ground floor of the building) and housed an electronics lab and the woodwork section.

 

We started off with an introduction to several types of soft and hard wood, and plywood. Some of them were available in the lab as they were going to be used during the class, or were stored in the workshop. Rarer woods like mahogany and teak were displayed as small wooden blocks. We were going to use rubber wood, and some plywood, for our projects. Next, we were introduced to some of the tools, with and without motors. We learnt to use the circular saw, table saw, drop saw, jigsaw, power drill and wood router. Being more petite than usual and unaccustomed to such tools, I found the 400-600W saws quite terrifying at the beginning.

 

clock
The first thing I made was a wall clock shaped like the beloved deer – Bambi. On a 9”x 9” block of rubber wood, I first traced the shape. Then used a jigsaw to cut off the edges and make the shape. Then used the drill to make some holes and create the shapes for eyes and spots. The sander machine was eventually used to smoothen the edges. This clock is now proudly displayed on a wall at my Daddy’s home very much like my drawings from age 6.

 

shelf
Next, we made a small shelf with dado joints that can be hung up on the wall. We started off with a block of rubber wood about 1’6’’ x 1’. The measurements for the various parts of this shelf were provided on a piece of paper, and we had to cut the pieces using the table saw, set to the appropriate width and angle. The places where the shelves connected with the sides were chiseled out and smoothed with a wood router. The pieces were glued together and nailed. The plane and sander were used to round the edges.

 

The last project for the day was to prepare the base for a coffee table. The material was a block of pinewood 2 inches thick and 2’ x 1’. We had to first cut these blocks from a bigger block, using the circular saw. Next, these were taken to the table saw to make 5 long strips of 2-inch width. One of these strips had about 1/2 inch from the edges narrowed down into square-ish pegs to fit into the legs of the table. The legs had some bits of the center hollowed out so they could be glued together into X shapes. These were left overnight to dry, and the next morning, with a hammer and chisel, the holes were made into which the pegs of the central bar could be connected. Finally, the drop saw was used to chop off the edges to make the table stand correctly. I was hoping to place a sheet of plywood on top of this base to use as a standing desk; however, it may need some more chopping to reach the right height.

 

tray
The final project was an exercise for the participants to design and execute an item using a 2’ x 1’ piece of plywood. I chose to make a tray with straight edges, using as much of the plywood as I could. I used the table saw to cut the base and sides. The smaller sides were tapered down, and handles were shaped out with a drill and jigsaw. These were glued together and then nailed firmly in place.

 

By the end of the 2nd day, I felt more confident handling the terrifying, but surprisingly safe, pieces of machinery. Identifying different types of wood, or making an informed decision when selecting wood, may need more practice and learning. The biggest challenge I think I will face if I do more of this is workspace. Like many other small families in urban India, I live high up in an apartment building, with limited space. This means that setting up an isolated area for a carpentry workbench would not only take up space, but, without an enclosure, would also let enough particle matter float around the living area. For the near future, I expect not to acquire any motorized tools, but to get a few manual tools that can be used to make small items (like storage boxes) with relative ease and very little disruption.

by runa at November 02, 2016 08:09 AM