Planet dgplug

January 17, 2019

Jason Braganza (Personal)

The Nicest Thank You Note, Ever

Thank you notes like these only make you fall in love with the folks who do the work.
And make you want to support them even more!

Thank you, Dan Carlin.
For all you do.

Brittany Durbin britt@dancarlin.com
5:26 AM (4 hours ago)
to me

Mario,

When people ask us how we fund our operations around here, I usually tell them about our “global street performer” business model.
A long time ago I realized that there's probably not a whole lot of meaningful difference between what I do and what a violin player who finds a nice location on a street corner somewhere, opens up his/her violin case and begins playing does.
We are both relying on “passers-by” throwing a few coins into the instrument case (or baseball cap as the case may be, haha) to keep us going.
Of course, I work a very busy, global “street corner” (virtually speaking, right?).

I want to thank you for taking the time to both listen to the work that we do, and to contribute to our ability to keep doing it. It's a cliché, but we really WOULDN'T be able to do this without the audience's help and support.
Not just in terms of finances, but also by telling others about the shows and spreading the word to help us grow the listenership. You all have been awesome.

So thank you from all of us (and from the other listeners who enjoy the work as well, but can't afford to help right now).
If everyone did as you did, we'd never have to stop doing this.

So, a thousand thanks. I hope we always live up to your expectations.

Warmly as Heck,

-Dan

P.S. If you enjoy what I write, go subscribe


by Mario Jason Braganza at January 17, 2019 04:15 AM

January 13, 2019

Jason Braganza (Personal)

The Final Word on Building Habits – Atomic Habits

If you want to build a habit, this is the definitive book on the topic. You could read about habits in other books to learn more, but if you actually want to build them, look no further.

This was the first book in a long time that moved me to actually take action. Succinct, pithy and packed with advice, there isn’t a wasted word in its 300-odd pages. And unlike others, it does not feel like three hundred pages. Moving from introduction to positing its arguments to tactical advice to conclusion, this feels more like a fast-paced novel.

On we go to the things that moved me.

Read more… (22 min remaining to read)

by Mario Jason Braganza at January 13, 2019 04:08 PM

January 12, 2019

Jaysinh Shukla

Python 3.7 feature walkthrough

In this post, I will explain improvements introduced in core Python version 3.7. Below is an outline of the features covered in this post.

  • Breakpoints

  • Subprocess

  • Dataclass

  • Namedtuples

  • Hash-based Python object file

breakpoint()

Breakpoints are an extremely important tool for debugging. Since I started learning Python, I have been using the same API for setting breakpoints. With this release, breakpoint() is introduced as a built-in function. Because it is in the built-in scope, you don’t have to import it from any module. You can call this function to set a breakpoint in your code. This approach is handier than importing and calling pdb.set_trace().

Breakpoint function in Python 3.7

Code used in above example

for i in range(100):
    if i == 10:
        breakpoint()
    else:
        print(i)

PYTHONBREAKPOINT

There wasn’t any handy option to disable or enable existing breakpoints with a single flag. With this release, you can do exactly that using the PYTHONBREAKPOINT environment variable. You can disable all breakpoints in your code by setting PYTHONBREAKPOINT to 0.

Breakpoint environment variable in Python 3.7

I advise setting PYTHONBREAKPOINT=0 in your production environment to avoid unwanted pauses at forgotten breakpoints.
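As a quick check, this sketch (the embedded snippet is illustrative) runs a child interpreter with PYTHONBREAKPOINT=0 and shows that the breakpoint() call is skipped:

```python
import os
import subprocess
import sys
import textwrap

# A small script containing a breakpoint() call.
code = textwrap.dedent("""
    for i in range(3):
        if i == 1:
            breakpoint()  # becomes a no-op under PYTHONBREAKPOINT=0
        print(i)
""")

# Run it with all breakpoints disabled; the loop completes uninterrupted.
env = dict(os.environ, PYTHONBREAKPOINT="0")
result = subprocess.run(
    [sys.executable, "-c", code],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout)  # 0, 1 and 2, one per line
```

Setting PYTHONBREAKPOINT to a dotted path such as ipdb.set_trace instead routes all breakpoint() calls to that debugger.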

subprocess.run(capture_output=True)

You can capture the standard output stream (stdout) and standard error stream (stderr) by enabling the capture_output parameter of the subprocess.run() function.

subprocess.run got capture_output parameter

You should note that this is an improvement over piping the streams manually. For example, subprocess.run(["ls", "-l", "/var"], stdout=subprocess.PIPE, stderr=subprocess.PIPE) was the previous approach to capturing stdout and stderr.
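A minimal before/after sketch (using a trivial child process so it runs anywhere):

```python
import subprocess
import sys

cmd = [sys.executable, "-c", "print('hello')"]

# Python 3.7: one flag captures both streams.
new_style = subprocess.run(cmd, capture_output=True, text=True)

# Pre-3.7: both streams had to be piped explicitly.
old_style = subprocess.run(
    cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True
)

print(new_style.stdout == old_style.stdout)  # True; both capture 'hello\n'
```

The text parameter (also new in 3.7) is a clearer spelling of universal_newlines, decoding the captured bytes to str.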

Dataclasses

The new class-level decorator @dataclass is introduced with the dataclasses module. Python is well known for achieving more by writing less. It seems this module will receive more updates in the future, which can be applied to remove significant lines of code. A basic understanding of type hints is expected to understand this feature.

When you wrap your class with the @dataclass decorator, the decorator will write the obvious constructor code for you. Additionally, it defines behaviour for the dunder methods __repr__(), __eq__() and __hash__().

Dataclasses.dataclass

Below is the code before introducing the dataclasses.dataclass decorator.

class Point:

    def __init__(self, x, y):
        self.x = x
        self.y = y

After wrapping with the @dataclass decorator, it reduces to the code below:

from dataclasses import dataclass


@dataclass
class Point:
    x: float
    y: float
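To see the generated dunder methods in action (a small sketch):

```python
from dataclasses import dataclass


@dataclass
class Point:
    x: float
    y: float


p = Point(1.0, 2.0)
print(p)                     # __repr__: Point(x=1.0, y=2.0)
print(p == Point(1.0, 2.0))  # __eq__ compares field values: True
```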

Namedtuples

Namedtuples are a very helpful data structure, yet I find they are less known amongst developers. With this release, you can set default values for fields.

Namedtuples with default arguments

Note: Default values are applied to the rightmost fields. In the above example, the default value 2 will be assigned to the field y.

Below is the code used in the example

from collections import namedtuple


Point = namedtuple("Point", ["x", "y"], defaults=[2,])
p = Point(1)
print(p)
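Python 3.7 namedtuples also record which fields received defaults in the _field_defaults attribute, which confirms that defaults attach to the rightmost fields:

```python
from collections import namedtuple

# Only the rightmost field, y, gets the default value 2.
Point = namedtuple("Point", ["x", "y"], defaults=[2])
print(Point._field_defaults)  # {'y': 2}

# With two defaults, both fields are covered and Point() works too.
Point3 = namedtuple("Point3", ["x", "y"], defaults=[1, 2])
print(Point3())  # Point3(x=1, y=2)
```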

.pyc

.pyc files are object files regenerated every time you change your source file (.py). They hold the bytecode the interpreter compiled for the executed code, which it reuses the next time you run it. The present approach to identifying an outdated object file compares source metadata such as the last-modified timestamp. With this release, that identification process can instead compare a hash of the source file. The hash-based approach is deterministic and consistent across platforms, unlike last-modified timestamps. This improvement is opt-in for now: core Python will continue with the metadata approach and slowly migrate to the hash-based approach.
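If you want to opt in to the hash-based behaviour explicitly, the py_compile module accepts an invalidation mode (PEP 552). This sketch compiles a throwaway module that way; the module name and contents are illustrative:

```python
import os
import py_compile
import tempfile

# Compile a throwaway module with hash-based invalidation (PEP 552):
# the .pyc then stores a hash of the source instead of its timestamp.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "mymod.py")
    with open(src, "w") as f:
        f.write("x = 42\n")
    pyc = py_compile.compile(
        src,
        invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
    )
    print(pyc)  # path to the generated .pyc inside __pycache__
```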

Summary

  • Calling breakpoint() will put a breakpoint in your code.

  • Disable all breakpoints in your code by setting an environment variable PYTHONBREAKPOINT=0.

  • subprocess.run([...], capture_output=True) will capture the output of stdout and stderr.

  • Class-level decorator @dataclass generates default logic for the constructor function. It also implements default logic for the dunder methods __repr__(), __eq__() and __hash__().

  • Namedtuple data structure supports default values to its arguments using defaults.

  • Outdated Python object files (.pyc) can now be detected by comparing source hashes instead of timestamps.

I hope you were able to learn something new by reading this post. If you want to read an in-depth discussion on each feature introduced in Python 3.7, then please read this official post. Happy hacking!

Proofreaders: Jason Braganza, Ninpo, basen_ from #python at Freenode, Ultron from #python-offtopic at Freenode, up|ime from ##English at Freenode

by Jaysinh Shukla at January 12, 2019 07:53 PM

Jason Braganza (Work)

How do we protect our work? How do we get paid for it?

How do we protect our work? How do we get paid for it?
(Or is that really the question we should be worried about when seeking to make our mark. And the importance of writing, of showing up, regularly.)

This is what I admire about Seth Godin. His unique ability to get to the heart of the question.

The question lies in the q & a after this really awesome episode at around the 22.45 mark. (The episode is a replay of this awesome talk. If you haven’t seen or heard it yet, do me (and yourself) a favour and do so.)

Hey Seth, it’s Ben from New York.
I was intrigued by the recent episode about copyright.
My question is … maybe more posing a paradox, because with copyright, there is this corporate ability for greed and control … at the same time for an individual producer or artist or maker of things, it does allow you survival.
And I do agree that the best way to change the culture and to share ideas is to make something you’ve made, widely available. At the same time the concept of copyright does allow you to say to somebody, “Hey, I made this! You’re giving it away for free!”
And in this digital age, where people expect to just click on something and have it, which is sort of like your bakery analogy, except people can now, because of the anonymity and the ease of the digital platforms, walk into a bakery, grab a loaf of bread and walk out, is how to allow ideas to spread in a wide and inexpensive or free way and still be able to make a living at it, without saying, here is a physical thing that you’re taking from me.
Please pay me for it.

And this is what I think, copyright allows an individual or an artist or an entrepreneur like myself to use as leverage so that our stuff like … you mentioned your audiobook being illegally uploaded to youtube … keeping that sort of thing from happening.

Anyway thanks so much for the book, the podcast, the blog. It’s been a great inspiration for me trying to find a way in this new age. Thanks.

Seth answers,

You’re getting at something powerful with this question, which is back to Tim O’Reilly’s comment that the enemy is not piracy. It’s obscurity.
That if you are a nascent artist, designer, writer, video producer, musician, does it pay to give your stuff away?
to give it away? give it away? give it away?
Hoping, that one day you’ll get paid for your work.
So the copyright laws are sort of secondary here, in the sense that, it is voluntary on your part, that as someone who is publishing your own work in a digital format, which means it does not cost you anything to give away one more copy, the question is, when does it end?
Does it mean that everything that is digital, will sooner or later be free?
Well, we’ve seen twenty or thirty years of this unfolding, and here’s what I think we found.
One, Ideas that spread, win.
If your idea reaches more people, you do better than if it doesn’t, and it turns out that ideas that are free spread further and faster than ideas that aren’t.
So radio, it was so powerful on radio, that the record labels paid money, payola, bribes, to the radio stations to play the songs for free, because they understood, that being a hit, being popular, was the way for an artist to make money going forward.
The thing is, that doesn’t pay the bills.

So how is it that someone who creates digital items is ever going to get paid?
Well, let me give you a couple of ways this could happen.
The first one is, the souvenir edition. The souvenir, concrete, limited edition of the thing you make, so that the true fan, the superfan will happily and eagerly pay for it.
We keep seeing this thing happening. It’s not going away.
People want to pay for something, others can’t have.
They want to pay for something that gives them status.

Number two is the idea that we can sell the specific.
So we can go to people and say, “Yea, if you want the traditional version of this song, or this digital artifact, that’s free. It’s in the world because it’s popular, but, if you want it to be specific to you, if you want us to play it live for you, that, that’s gonna cost money.”
And we certainly see that in the world of consulting.
So that you can give away a 300 page or 200 page or 20 word BIG idea, just give it away constantly, but if someone wants your specific advice, that, that’s going to cost money.

And the third way, that I’m going to propose that we can charge for the work we do, is that it can be now. That if you want it now, if you want it live, if you want it first, that costs money.
People will wait in line, because again they get status, from going first.

So it’s not really the answer to your question. I’m not proposing that copyright go away, but I do think that individual creators have a huge unfair advantage over institutions that need to pay big bills.
And that advantage is that we can give ideas away.
A blog post a day.
A podcast a week.
We can give them away, because the digital environment makes that a powerful way to spread our ideas, but then we can sell the other thing to people who want to pay for it.

P.S. If you enjoy reading my posts, share them with your friends. And tell them to subscribe!


by Mario Jason Braganza at January 12, 2019 04:59 AM

January 08, 2019

Jason Braganza (Work)

William Vincent’s list of programming books for 2019

Will Vincent, author of Django for Beginners and REST APIs with Django, has published his list of book recommendations for the year.
Read the latest posts on his website to get at them.

If you are a learner like me and wanted a professionally filtered list (as in, too lazy to go hunt them down), this is a godsend.
He covers books on Django, React, Flask, & JavaScript and tutorials for Python, Django & React.

Also check out his year in review.

Thank you muchly, Will.


by Mario Jason Braganza at January 08, 2019 03:15 AM

January 04, 2019

Kushal Das

2018 blog review

Last year, I made sure that I spent more time writing, mostly by waking up early before anyone else in the house. The total number of posts was 60, but that number came down to 32 in 2018. Page views, though, held at 88% of 2017’s.

I managed to wake up early on most days, but I spent that time reading and experimenting with various tools/projects. SecureDrop, the Tor Project and Qubes OS were at the top of that list. I am also spending more time with books, though now the big problem is finding space at home to keep those books properly.

I never wrote regularly throughout the year. If you look at the dates I published, you will find that sometimes I managed to publish regularly for a month and then vanished again for some time.

There was a whole paragraph here about why I did not write and vanished, but then I deleted it before posting.

You can read the last year’s post on the same topic here.

by Kushal Das at January 04, 2019 02:03 AM

December 26, 2018

Shakthi Kannan

Ansible deployment of Jenkins

[Published in Open Source For You (OSFY) magazine, August 2017 edition.]

Introduction

In this sixth article in the DevOps series, we will install Jenkins using Ansible and set up a Continuous Integration (CI) build for a project that uses Git. Jenkins is Free and Open Source automation server software that is used to build, deploy and automate projects. It is written in Java and released under the MIT license. A number of plugins are available to integrate Jenkins with other tools such as version control systems, APIs and databases.

Setting it up

A CentOS 6.8 Virtual Machine (VM) running on KVM will be used for the installation. Internet access should be available from the guest machine. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.3.0.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/jenkins.yml
ansible/playbooks/admin/uninstall-jenkins.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

jenkins ansible_host=192.168.122.120 ansible_connection=ssh ansible_user=root ansible_password=password

An entry for the jenkins host is also added to the /etc/hosts file as indicated below:

192.168.122.120 jenkins

Installation

The playbook to install the Jenkins server on the CentOS VM is given below:

---
- name: Install Jenkins software
  hosts: jenkins
  gather_facts: true
  become: yes
  become_method: sudo
  tags: [jenkins]

  tasks:
    - name: Update the software package repository
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - java-1.8.0-openjdk
        - git
        - texlive-latex
        - wget

    - name: Download jenkins repo
      command: wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

    - name: Import Jenkins CI key
      rpm_key:
        key: http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
        state: present

    - name: Install Jenkins
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - jenkins

    - name: Allow port 8080
      shell: iptables -I INPUT -p tcp --dport 8080 -m state --state NEW,ESTABLISHED -j ACCEPT

    - name: Start the server
      service:
        name: jenkins
        state: started

    - wait_for:
        port: 8080

The playbook first updates the Yum repository and installs the Java OpenJDK software dependency required for Jenkins. The Git and Tex Live LaTeX packages are required to build our project, github.com/shakthimaan/di-git-ally-managing-love-letters (now at https://gitlab.com/shakthimaan/di-git-ally-managing-love-letters). We then download the Jenkins repository file, and import the repository GPG key. The Jenkins server is then installed, port 8080 is allowed through the firewall, and the script waits for the server to listen on port 8080. The above playbook can be invoked using the following command:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/jenkins.yml -vv

Configuration

You can now open http://192.168.122.120:8080 in the browser on the host to start configuring Jenkins. The web page will prompt you to enter the initial Administrator password from /var/lib/jenkins/secrets/initialAdminPassword to proceed further. This is shown in Figure 1:

Unlock Jenkins

The second step is to install plugins. For this demonstration, you can select the “Install suggested plugins” option, and later install any of the plugins that you require. Figure 2 displays the selected option:

Customize Jenkins

After you select the “Install suggested plugins” option, the plugins will get installed as shown in Figure 3:

Getting Started

An admin user is required for managing Jenkins. After installing the plugins, a form is shown for you to enter the user name, password, name and e-mail address of the administrator. A screenshot of this is shown in Figure 4:

Create First Admin User

Once the administrator credentials are stored, a “Jenkins is ready!” page will be displayed, as depicted in Figure 5:

Jenkins is ready!

You can now click on the “Start using Jenkins” button to open the default Jenkins dashboard shown in Figure 6:

Jenkins Dashboard

An example of a new project

Let’s now create a new build for the github.com/shakthimaan/di-git-ally-managing-love-letters project. Provide a name in the “Enter an item name” text box and select the “Freestyle project”. Figure 7 shows the screenshot for creating a new project:

Enter an item name

The next step is to add the GitHub repo to the “Repositories” section. The GitHub HTTPS URL is provided as we are not going to use any credentials in this example. By default, the master branch will be built. The form to enter the GitHub URL is shown in Figure 8:

Add GitHub repo

A Makefile is available in the project source code, and hence we can simply invoke “make” to build the project. The “Execute shell” option is chosen in the “Build” step, and the “make clean; make” command is added to the build step as shown in Figure 9.

Build step

From the left panel, you can click on the “Build Now” link for the project to trigger a build. After a successful build, you should see a screenshot similar to Figure 10.

Build success

Uninstall

An uninstall script to remove the Jenkins server is available in the playbooks/admin folder. It is given below for reference:

---
- name: Uninstall Jenkins
  hosts: jenkins
  gather_facts: true
  become: yes
  become_method: sudo
  tags: [remove]

  tasks:
    - name: Stop Jenkins server
      service:
        name: jenkins
        state: stopped

    - name: Uninstall packages
      package:
        name: "{{ item }}"
        state: absent
      with_items:
        - jenkins

The script can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/admin/uninstall-jenkins.yml

December 26, 2018 01:00 PM

December 10, 2018

Kushal Das

Flatpak application shortcuts on Qubes OS

In my last blog post, I wrote about Flatpak applications on Qubes OS AppVMs. Later, Alexander Larsson pointed out that running the actual application from the command line is still not user-friendly, and Flatpak already solved this by providing proper desktop files for each application it installs.

How to enable the Flatpak application shortcut in Qubes OS?

The Qubes documentation has detailed steps on how to add a shortcut only for a given AppVM or make it available from the template to all VMs. I decided to add it from the template, so that I can click on the Qubes Settings menu and add it for the exact AppVM. I did not want to modify the required files in dom0 by hand. The reason: just being lazy.

From my AppVM (where I have the Flatpak application installed), I copied the desktop file and also the icon to the template (Fedora 29 in this case).

qvm-copy /var/lib/flatpak/app/io.github.Hexchat/current/active/export/share/applications/io.github.Hexchat.desktop
qvm-copy /var/lib/flatpak/app/io.github.Hexchat/current/active/export/share/icons/hicolor/48x48/apps/io.github.Hexchat.png

Then in the template, I moved the files to their correct locations. I also modified the desktop file to mark that this is a Flatpak application.

sudo cp ~/QubesIncoming/xchat/io.github.Hexchat.desktop /usr/share/applications/io.github.Hexchat.desktop
sudo cp ~/QubesIncoming/xchat/io.github.Hexchat.png /usr/share/icons/hicolor/48x48/

After this, I refreshed the menus and then added the entry from Qubes Settings; the application is now available in the menu.

by Kushal Das at December 10, 2018 10:10 AM

November 29, 2018

Sayan Chowdhury

Fedora AMIs for EC2 Instances (A1) Powered by Arm-Based AWS Graviton Processors

AWS recently announced their new fleet of A1 EC2 instances which is powered by ARM at AWS re:Invent.

Gladly, the Fedora Kernel Team (Laura Abbott, Justin Forbes and Jeremy Cline) and Peter Robinson already had everything in place. The only missing piece was adding support to fedimg to create arm64-based AMIs.

With the release of fedimg==2.4.0 (thanks to Patrick), the new AMIs were happily getting uploaded to AWS, and with a slight delay, Fedora had arm64 AMIs alongside the x86_64 ones.

Known issues?

One of the issues I faced was that the instance takes some time, almost 5 minutes, to get ready. But work is under way to make the experience better.

Availability?

A1 instances are currently supported only in selected regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland).

Also, not all availability zones are supported, so while launching instances you might see this error:

An error occurred (Unsupported) when calling the RunInstances operation: Your requested instance type (a1.large) is not supported in your requested Availability Zone (us-west-2b). Please retry your request by not specifying an Availability Zone or choosing us-west-2c, us-west-2a.

which I found is linked to the subnet’s availability zone. So you might need to change the subnet’s availability zone to launch the instance.

For any more issues, drop in to #fedora-arm or #fedora-cloud IRC channel on Freenode.

Read the official announcement here.

by Sayan Chowdhury at November 29, 2018 11:20 AM

November 27, 2018

Anwesha Das

Upgraded my blog to Ghost 2.6

I have been maintaining my blog for a while. It is a self-hosted Ghost blog, with Casper, the Ghost default, as its theme. In September 2018, Ghost updated to version 2.0. Now it was my turn to update mine.

It is always advisable to test changes before deploying them to the production server. I maintain a staging instance for exactly that. I test any and all changes there before touching production, and I did the same thing here.

I exported the Ghost data into a JSON file and prettified it for easier reading. I removed the old database and started the container for the new Ghost, then reimported the data from the JSON file.

I had another problem to solve: the theme. I used to have Casper, but I do not like its new look for my blog, which is predominantly a text blog. I was unable to make the old theme work with the new Ghost, so I chose Attila instead. I made some modifications, uploaded it and enabled it for my blog. A huge thanks to the Ghost community and the developers; it was a really smooth job.

by Anwesha Das at November 27, 2018 02:57 PM

November 23, 2018

Sayan Chowdhury

Fedora 29 Bangalore Release Party

23rd November 2018

The Fedora community of Bangalore assembled at the Red Hat Bangalore office. The event was scheduled to start at 1300, but lunch at the office postponed it by 45 minutes.

Sumantro kicked off the event with a small introduction, following which Vipul gave an introduction to open source with a short choco-chip story.

Sumantro was back on stage after that to talk about what’s coming next: GNOME/Pantheon, Python 2 deprecation, Ansible, IoT and Modularity.

Sinny talked and demoed the shiny new Fedora Silverblue.

Finally, the cupcakes were revealed and the event ended with a group photo.

Vipul sharing the choco chip story
Sumantro on the stage
Sinny demoing Fedora Silverblue
Cupcakes!
Cupcakes!
Sinny aka CoreOS, Silverblue, and KDE
The gathering

by Sayan Chowdhury at November 23, 2018 09:34 AM

November 18, 2018

Anwesha Das

Setting up Qubes OS mirror at dgplug.org

I have been trying to work on my sysadmin skills for some time now. I was already maintaining my own blog, and I was pondering learning Ansible. DGPLUG was planning to create a new Qubes OS mirror, so I took the opportunity to learn Ansible and set up a new server.

Qubes OS is an operating system built with security in mind. As they like to define it, it is a “reasonably secure operating system”. It is loved by security professionals and activists worldwide.

The mirror contains both the Debian and RPM packages used by the Qubes operating system. The mirror is fully operational and mentioned on the official Qubes OS mirror list.

by Anwesha Das at November 18, 2018 07:05 PM

November 10, 2018

Robin Schubert

Truth vs. Theory

The dumbest thing you can do is to think you're smart.

We often tend to think we know a lot of things. Things we read, hear or see from whatever source of information may be perceived as simply true. However, I think it is very important to question even the most trivial of best-known things. The belief in knowledge does not just kill creativity; it can also be dangerous.

I studied physics, and there is one take-home message I would like to share. People often hear that physicists have discovered this or that. In most cases, this leads to the belief that we know how the world around us works and what it is made of. Actually, we don't. Physics works differently: it won't tell you how things work or what they are made of; instead, it provides a set of tools, models and theories, derived from observations and previous models and theories, that often yield pretty good approximations and predictions of what we observe in the world around us. This is not better or worse than the truth would be; in fact, it is a very pure and straightforward approach that sometimes lets us go far beyond what seems possible.

It would in fact be quite optimistic to think that we could understand truth, given the limitations of our nature. We perceive the world in three dimensions, are heavily dependent on language (I could write a whole book on that) and have a limited set of senses. What is worse: we're not even using them. We rely on science and studies instead, losing more and more the ability to perceive, interpret (and believe in) the signals of our own body. You cannot convince a self-styled scientist who knows how things work of the efficacy of some compound just because you feel it is good and right for you. Instead, the compound has to go through several stages of clinical trials that try to measure safety, tolerability and efficacy in vitro, in animals and in humans. While I understand and appreciate this approach, I often feel that the available tools for assessing these domains are not even close to being suitable for the task. As a result, a negative trial only tells us that no effect was measured.

It's neither easy nor fun to discuss with someone who is fiercely convinced by something they just read in an article. While it's a very good thing to read (or to gather information through other channels), that information should not be taken for granted just because it has been printed in a journal. Questioning that information at least every once in a while should be a habit.

by Robin Schubert at November 10, 2018 12:00 AM

October 29, 2018

Anu Kumari Gupta (ann)

Enjoy octobers with Hacktoberfest

I know what you are going to do this October. Scratching your head already? No, don’t, because I will explain in detail all that you can do to make this October a remarkable one by participating in Hacktoberfest.

Guessing what the Hacktoberfest buzz is all about? 🤔

Hacktoberfest is like a festival celebrated by the open source community that runs throughout the month. It is a celebration of open source software, and it welcomes everyone, irrespective of their knowledge of open source, to participate and make contributions.

  • Hacktoberfest is open to everyone in our global community!
  • Five quality pull requests must be submitted to public GitHub repositories.
  • You can sign up anytime between October 1 and October 31.

<<<<Oh NO! STOP! Hacktoberfest site defines it all. Enough! Get me to the point.>>>>

Already had enough of the rules and regulations and still wondering what it is all about, why to do it and how to get started? Welcome to the right place. This Hacktoberfest centres a lot around open source. What is that? Get your answer.

What is open source?

If you are stuck on the name itself, don’t worry; it’s nothing other than what the phrase ‘open source’ means. Open source refers to making the source code of a project, work, software, etc. available to everyone, so that others can see it, make changes that benefit the project, share it and download it for use. The main aims are transparency, collaborative participation and the overall development and maintenance of the work, and it is prized for its redistributive nature. With open source, you can organize events, schedule your plans and host them on an open source platform as well. The changes you make to others’ work are termed contributions. A contribution does not necessarily have to be core code. It can be anything you like: design, organizing, documentation, projects of your liking, etc.

Why should I participate?

The reason you should is that you get to learn, grow and eventually develop skills. When you make your work public, others analyze it and give you valuable feedback through comments and issues. The kind of work you do gets you recognized among others. By participating in active contribution, you also find mentors who can guide you through the project, which helps you in the long run.

And did I tell you, you get T-shirts for contributing? Hacktoberfest lets you win a T-shirt by making at least 5 contributions. Maybe this is motivation enough to start, right? 😛 Time to enter the open source world.

How to enter into the open source world?

All you need is Git and an understanding of how to use it. If you are a beginner and don’t know where to start, or have difficulty starting off, refer to “Hello Git” before moving further. That article covers the basics of Git and how to push your code through Git to make it available to everyone. Understanding matters most, so take your time going through it and absorbing the concepts. Once you are good to go, you are ready to contribute to others’ work.

Steps to contribute:

Step 1: You should have a GitHub account.

Refer to the post “Hello Git”, if you have not already. It walks through the basic Git workflow and creating your first repository (your own piece of work).
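If you want to check that Git works on your machine before touching anyone else’s project, a minimal local run-through looks like this (the repository name, file name, and identity values are just examples):

```shell
# Create a brand-new repository and record a first commit
mkdir hello-git && cd hello-git
git init                            # turn the directory into a Git repository
git config user.name "Your Name"    # tell Git who you are (example values)
git config user.email "you@example.com"

echo "My first repo" > README.md
git add README.md                   # stage the file
git commit -m "Add README"          # record the change
git log --oneline                   # shows the commit you just made
```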

Step 2: Choose a project.

I know choosing a project is a bit confusing. It seems overwhelming at first, but trust me, once you get an insight into how it works, you will feel proud of yourself. If you are a beginner, I recommend first understanding the process by making small changes, like correcting mistakes in a README file or adding your name to a contributors list. As I already mentioned, not every contribution involves coding. Select whatever you like and feel you can change in a way that improves the current piece of work.

There are numerous beginner-friendly as well as cool projects that you will see labelled as hacktoberfest. Pick one of your choice. Once you have selected a project, open it and follow the rest of the steps.

Step 3: Fork the project.

You will come across several similar posts that give you instructions and the commands to reach the objective, but what matters most is that you understand what you are doing and why. Here, I will explain why exactly you need to run these commands and what these terms mean.

Fork means to create a copy of someone else’s repository under your own GitHub account. By forking, you make a copy of the project for yourself so you can change it freely. The reason for doing this is that you cannot make changes directly to the main repository. Your changes stay with you until you finalize them, commit, and let the owner of the project know about them.

You should see the fork option somewhere at the top right.

[Screenshot: the Fork button at the top right of the repository page]

Do you see the number beside it? That is the number of times this repository has been forked. Click the fork option and you will see it forking:

[Screenshot: GitHub’s forking-in-progress page]

Notice the change in the URL. You will see that the repository is now under your account. Now you have your own copy of the project.

Step 4: Clone the repository

What is cloning? It is downloading the repository so that it is available on your desktop for making changes. Now that you have the project in hand, you are ready to amend it as you feel necessary, using the editors and tools on your own machine.

The green “clone or download” button shows you a link, along with an option to download the project directly.

If you have Git installed on your machine, you can clone it from the command line:

git clone <copied-url>

Here <copied-url> is the URL shown to you for copying.

Step 5: Create a branch.

A branch is like the separate directories you have on your computer: each branch holds a different version of the changes you make. Branching is essential because it lets you keep track of the changes you made.

To perform these operations on your machine, first change into the repository directory on your computer:

cd <project-name>

Now create a branch using the git checkout command:

git checkout -b <branch-name>

Here <branch-name> is a name you choose. It can be anything you like, but keep it related to your change.

Step 6: Make changes and commit

If you list all the files and subdirectories with the ls command, your next step is to find the file or directory you need to change and make the necessary edits. For example, if you have to update the README file, open it in an editor and write your changes. After you are done updating, commit the change and you are ready for the next step.
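Concretely, staging and recording an edit looks like this. A throwaway repository and file name are used here so the snippet runs on its own; in practice you would already be inside your cloned project, on your branch:

```shell
# setup: a scratch repository standing in for your cloned project
mkdir scratch && cd scratch && git init -q
git config user.name "Your Name" && git config user.email "you@example.com"

echo "One line of improvement" >> README.md   # edit a file
git add README.md                             # stage the change
git commit -m "Update README"                 # record it with a message
```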

Step 7: Push changes

Now you want these changes uploaded back to where they came from, so the phrase used is that you “push changes”. You do this because, after improving the project, you want to make your work known to the owner or creator of the project.

So to push your changes, run:

git push origin <branch-name>

By default, Git refers to the remote repository’s URL by the shortname origin. You can use another shortname in its place, but you must then use the same name in the next step as well.

Step 8: Create a pull request

If you go to the repository on GitHub, you will see information about your update, and beside it a “Compare and pull request” option. A pull request is a request to the creator of the main project to look at your changes and merge them into the main project, if that is something the owner allows and wants to have. The owner sees the changes you made and applies whatever patches he/she feels are right.
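Putting steps 4 to 7 together, the whole command-line flow can be sketched as follows. A local bare repository stands in for your fork on GitHub, and all names here (my-fork, update-readme, the commit message) are made up for illustration; step 8, the pull request itself, happens on the GitHub website:

```shell
# A bare repository standing in for your fork on GitHub
git init -q --bare "$PWD/my-fork.git"

# Step 4: clone your fork
git clone "$PWD/my-fork.git" project
cd project
git config user.name "Your Name" && git config user.email "you@example.com"

# Step 5: create a branch for your change
git checkout -b update-readme

# Step 6: make a change and commit it
echo "A small improvement" >> README.md
git add README.md && git commit -q -m "Improve README"

# Step 7: push the branch to your fork (origin)
git push -q origin update-readme
# Step 8 happens on GitHub: open a pull request from this branch
```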

And you are done. Congratulations! 🎉

Not only this: you are always welcome to go through a project’s issue list and try to solve a problem. First comment to let everyone know the idea you have for solving the issue, and once your idea is approved, make your contribution as above. You can then make a pull request and reference in it the issue you solved.

But, but, but… why don’t you create issues on a working project of your own and add the Hacktoberfest label for others to solve? You will be amazed by the participation. You are the admin of your project: people will create issues and pull requests, and you review them and merge them into your main project. Try it out!

I hope you found this useful and enjoyed doing it.

Happy Learning!

by anuGupta at October 29, 2018 08:20 PM

October 22, 2018

Sanyam Khurana

Event Report - DjangoCon US

If you've already read about my journey to PyCon AU, you're aware that I was working on a Chinese app. I got one more month to work on the app after PyCon AU, which meant improving my talk with more material, such as passing locale information to async tasks, switching languages in templates, and supporting multiple languages in templates.

I presented the second version of the talk at DjangoCon US. The very first people I saw again, as soon as I entered the DjangoCon US venue, were Russell and Katie from Australia. I was pretty jet-lagged, as my international flight had been delayed by 10 hours, but I tried my best to deliver the talk.

Here is the recording of the talk:

You can see the slides of my talk below or by clicking here:

After the conference, we also had a DSF meet and greet, where I met Frank, Rebecca, Jeff, and a few others. Everyone was so encouraging and we had a pretty good discussion around Django communities. I also met Carlton Gibson, who recently became a DSF Fellow and also gave a really good talk at DjangoCon on Your web framework needs you!.

Carol, Jeff, and Carlton encouraged me to start contributing to Django, so I was waiting eagerly for the sprints.

DjangoCon US with Mariatta Wijaya, Carol Willing, Carlton Gibson

Unfortunately, Carlton wasn't there during the sprints, but Andrew Pinkham was kind enough to help me set up the codebase. We were unable to run the test suite successfully and tried to debug that; later, we agreed to use django-box to set things up. I contributed a few PRs to Django and was also able to address reviews on my CPython patches. During the sprints, I also had a discussion with Rebecca, and we listed some points on how we can lower the barrier for new contributions in Django and bring in more contributors.

I also published a report of my two days of sprinting on Twitter:

DjangoCon US contributions report by Sanyam Khurana (CuriousLearner)

I also met Andrew Godwin & James Bennett. If you haven't yet seen James's Django in-depth talk, I highly recommend watching it. It gave me a much better understanding of how things happen under the hood in Django.

Altogether, it was a great experience being an attendee, speaker, and volunteer at DjangoCon. It was a very rewarding journey for me.

There are tons of things we can improve in PyCon India, taking inspiration from conferences like DjangoCon US, and I hope to help implement them in future editions of the conference.

Here is a group picture of everyone at DjangoCon US. Credits to Bartek for the amazing click.

DjangoCon US group picture

I want to thank all the volunteers, speakers and attendees for an awesome experience and making DjangoCon a lot of fun!

by Sanyam Khurana at October 22, 2018 06:57 AM