Planet dgplug

August 19, 2019

Jason Braganza (Work)

Escape the Algorithm!

Seeing as you folks are reading my newsletter, I know I am preaching to the choir, but this article summarises my thoughts on social media excellently!

from the Art of Manliness,

At first, it wasn’t so bad. But then I started noticing that I wasn’t seeing all the updates from pages I followed on Facebook.
Come to find out, Facebook started changing their News Feed algorithm so that only the content Facebook thought you’d be interested in the most showed up in your feed. Facebook claimed they were just trying to help users sift through the firehose of information being blasted at them. Critics argued Facebook was just trying to keep people more engaged on Facebook because that makes money for Facebook. And that they were trying to force pages to pay money for their content to show up in the News Feeds they had once shown up in organically.
I was just ticked that I wasn’t seeing all the stuff from Facebook pages that I had deliberately opted into getting updates from.

Twitter added changes to their algorithm that boosted tweets to the top of your timeline based on what they thought you’d want to see. Again, Twitter claimed they were trying to be helpful. Critics argued it was just a ploy for users to engage with and stay on Twitter longer (which makes Twitter more money).
I was miffed some algorithm was deciding what I saw.

and

Besides the comments, there are those other little signals on social media that can end up skewing what you think of something: likes, RTs, faves, hearts.

And come to find out, a lot of these “one-bit indicators” (as Digital Minimalism author Cal Newport calls them) are coming from bots. Not from actual people. A lot of social media is fake. Hype.

The benefits? Here’s Brett again,

I see the content I want to see.
Am I interested in everything Marginal Revolution puts out? Of course not, but instead of some stupid social media algorithm trying to predict whether I’ll be interested in a piece of content or not, I get to decide whether I’m interested in it or not. It’s nice being in complete control of my media consumption again.

I no longer see other people’s opinions about content before I consume said content.
When you subscribe to a site’s RSS feed, you just see the content. That’s it. There are no comments or social media feedback about that content. Instead of the hot take of some internet stranger tainting how I read something, I read it completely unfiltered and come to my own conclusions.
Reading content without the social media commentary is a way to practice self-reliance. Instead of relying on other people to help you figure out what you think of something, you get to figure that out yourself. You’re in charge, and being in charge of your opinions feels good.

I spend less time online.
AoM podcast guest John Zeratsky calls Facebook, Twitter, and Instagram “infinity pools.” They’re apps in which the content is continually refreshed, and thus has no “end.” You might use Twitter to follow some “thought leader” you enjoy, but besides the stuff he puts out, you’re also presented with all the comments that his followers append to his tweets. There’s a constant stream of new content and commentary on Twitter, and as our brains desire novelty, that makes the platform massively appealing to check over and over again. You’re never done reading content on social media.
Now that I just consume my content via RSS or email, I’ve found myself spending less time online. You just read the article and you’re done. There’s some finitude to it.

I rest easy knowing that social media companies have less data on me.
Social media companies don’t charge you money to use their services, but that doesn’t mean the services are “free.” Instead of exchanging money, you hand over gobs of personal and private information about yourself, which allows social media companies to sell ads targeted to your personal dossier.
What’s more, these companies (particularly Facebook) have a lousy track record of keeping your private information private.
The rugged, individualistic, keep-out-of-my-business side of myself treasures his privacy. I like for other people or companies to not know what’s going on in all facets of my life. While I’ll likely never be able to completely eliminate my digital footprint, reducing my social media use can significantly shrink it.

I’m happier.
One of the things I’ve noticed about not using social media is that I just feel happier.
First, because I’m spending less time online I have more time to do things I enjoy in real life.
Second, because I don’t see the opinions of the masses on RSS or email, I don’t expose myself to all the negativity that plagues social media. I’ve noticed I’m less pissy the less I’m exposed to the low-grade fever of anger that constantly brews online.
Third, social media can really skew what your brain considers important. If everyone on Twitter was talking about it, it had to be important, right? Not really.
Now that I’m off social media, my brain’s bandwidth is no longer clogged up with all that faux-important social media garbage. My attention is focused on the stuff that’s really important: family, friends, health, spirituality, and of course, barbell training.

Who doesn’t want to be happier?
So the best use of our time is probably to quit social media.
How then to keep up with things that interest us?
With old-fashioned tools.
Email & RSS readers.
Subscribe to interesting newsletters (cough, like mine, cough).
Use a feed reader to keep up with interesting sites.

Algorithms control what you see, and as what you pay attention to becomes your reality, algorithms create your reality. If you want to program your own reality, rather than having it programmed by corporate computer coding, then escape the algorithm, escape social media, take the training wheels off your online content consumption, and ride it in a more direct, autonomous, liberated way.

P.S. The whole article is well worth your time!
P.P.S. After reading this, don’t you think you need to forward my mails to your friends and tell them to subscribe? :)


by Mario Jason Braganza at August 19, 2019 12:15 AM

August 18, 2019

Priyanka Saggu

The Simplified playbook!

August 18, 2019

Ah, I literally procrastinated a lot before writing this blog post. But for once, I don’t regret it at all, because the time was well spent with my parents and family.

Anyway, before moving forward with other tasks in the series, I am supposed to finish the backlog (writing my 2 blog posts, including this one).

And therefore, let me quickly describe the reason for writing this post.

This post doesn’t actually have a distinct topic; rather, it’s an update/improvement to one of my last blog posts, “A guide to a “safer” SSH!”. Back there, I was doing every errand with separate, individual tasks. For instance, when I had to make changes in the sshd_config file, I found the intended lines using a regex and replaced each one of them individually with the new required configuration. The same was the case while writing iptables rules on a remote machine through the ansible playbook.

But executing these related tasks individually made the whole ansible implementation/deployment process extremely time-consuming, and the ansible playbook itself unnecessarily lengthy and complex. Thus, the real idea behind writing these playbooks, automating things in a faster and easier manner, proved pretty much worthless in my case.

So, here I am taking up Kushal’s advice and improving these ansible playbooks for simplicity and better-optimized execution time. The whole idea is to collect these related changes (for example, the sshd_config edits done for SSH hardening) in a single file and copy that file to the intended location/path/directory on the remote node/server.

Let me quickly walk you through some simple hands-on examples to make the idea clearer and see it in action. (We will only be improving our existing ansible playbook.)

  • So, earlier in the post, while writing the “ssh role“, our tasks looked something like this:
---
# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
          user: "{{username}}"
          state: present
          key: "{{ lookup('file', item) }}"
  with_fileglob: public_keys/*.pub
- name: Harden sshd configuration
  lineinfile:    
          dest: /etc/ssh/sshd_config    
          regexp: "{{item.regexp}}"    
          line: "{{item.line}}"
          state: present
  with_items:
    - regexp: "^#?PermitRootLogin"
      line: "PermitRootLogin no"
    - regexp: "^#?PasswordAuthentication"
      line: "PasswordAuthentication no"
    - regexp: "^#?AllowAgentForwarding"
      line: "AllowAgentForwarding no"
    - regexp: "^#?AllowTcpForwarding"
      line: "AllowTcpForwarding no"
    - regexp: "^#?MaxAuthTries"
      line: "MaxAuthTries 2"
    - regexp: "^#?MaxSessions"
      line: "MaxSessions 2"
    - regexp: "^#?TCPKeepAlive"
      line: "TCPKeepAlive no"
    - regexp: "^#?UseDNS"
      line: "UseDNS no"
- name: Restart sshd
  systemd:
          state: restarted    
          daemon_reload: yes
          name: sshd
...

And if we observe the second-to-last task closely, we are altering each intended line of the sshd_config file individually, which is definitely not required. Rather, the changes could be made all at once in a copy of the existing “sshd_config” file, which is then sent to the remote node at the required location/path/directory.

This copied sshd_config file will reside in the “files/” directory of our “ssh role“.

├── ssh
│   ├── defaults
│   │   └── main.yml
│   ├── files   👈(HERE)
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
  • Copy the local sshd_config file to this files/ directory.
# like in my case, the ansible playbook is residing at "/etc/ansible/playbooks/"
$ sudo cp /etc/ssh/sshd_config /etc/ansible/playbooks/ssh/files/
  • And then make the required changes in this file, as specified in the second-to-last task of our old “ssh role“.
  • Finally, modify the “ssh role“ by replacing the second-to-last task with a task that copies this file to the “/etc/ssh/” directory on the remote node, thus removing the unnecessary repetitive steps.
  • Now, the new “ssh role” would look like the following.
---
# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
          user: "{{username}}"
          state: present
          key: "{{ lookup('file', item) }}"
  with_fileglob: public_keys/*.pub
- name: Copy the modified sshd_config file to remote node's /etc/ssh/ directory.
  copy:
    src: /etc/ansible/playbooks/ssh/files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
- name: Restart sshd
  systemd:
          state: restarted    
          daemon_reload: yes
          name: sshd
...

And we are done. This will execute considerably faster than the old ansible role, and it looks much simpler as well.
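One optional safety net worth mentioning: the copy module supports a validate parameter, which runs a check command against the staged file before it replaces the destination. Here is a sketch of the same copy task with validation added (assuming sshd lives at /usr/sbin/sshd on the remote node):

- name: Copy the modified sshd_config file to remote node's /etc/ssh/ directory.
  copy:
    src: /etc/ansible/playbooks/ssh/files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    # refuse to install the file if sshd can't parse it (%s is the staged copy)
    validate: /usr/sbin/sshd -t -f %s

This way, a typo in the local sshd_config can never lock us out of the server.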


Similar improvements can be made in case of “iptables role” as well.

Our old “iptables role” looked something like this:

---
# tasks file for iptables
- name: Install the `iptables` package
  package:
    name: iptables
    state: latest
- name: Flush existing firewall rules
  iptables:
    flush: true
- name: Firewall rule - allow all loopback traffic
  iptables:
    action: append
    chain: INPUT
    in_interface: lo
    jump: ACCEPT
- name: Firewall rule - allow established connections
  iptables:
    chain: INPUT
    ctstate: ESTABLISHED,RELATED
    jump: ACCEPT
- name: Firewall rule - allow port ping traffic
  iptables:
    chain: INPUT
    jump: ACCEPT
    protocol: icmp
- name: Firewall rule - allow port 22/SSH traffic
  iptables:
    chain: INPUT
    destination_port: 22
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - allow port80/HTTP traffic
  iptables:
    chain: INPUT
    destination_port: 80
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - allow port 443/HTTPS traffic
  iptables:
    chain: INPUT
    destination_port: 443
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - drop any traffic without rule
  iptables:
    chain: INPUT
    jump: DROP
- name: Install `netfilter-persistent` && `iptables-persistent` packages
  package:
      name: "{{item}}"
      state: present
  with_items:
     - iptables-persistent
     - netfilter-persistent
...
  • In order to simplify it, create a new file named “rules.v4” in the “files/” directory of the “iptables role” and paste the following iptables rules in there.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [151:12868]
:sshguard - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -j DROP
COMMIT
  • And the final step is the same as in the role above, i.e. copying this “rules.v4” file to the “/etc/iptables” directory on the remote node.
  • So, the new improved “iptables role” will now look like the following.
---
# tasks file for iptables
- name: Install the `iptables` package
  package:
    name: iptables
    state: latest
- name: Flush existing firewall rules
  iptables:
    flush: true
- name: Inserting iptables rules in the "/etc/iptables/rules.v4" file.
  copy:
    src: /etc/ansible/playbooks/iptables/files/rules.v4
    dest: /etc/iptables/rules.v4
    owner: root
    group: root
    mode: 0644
- name: Install `netfilter-persistent` && `iptables-persistent` packages
  package:
      name: "{{item}}"
      state: present
  with_items:
     - iptables-persistent
     - netfilter-persistent
...
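One caveat: copying rules.v4 only persists the rules; they are loaded into the kernel at boot. If the rules should take effect immediately as well, a task like the following could be appended (a sketch, assuming the remote node runs systemd and has the netfilter-persistent service installed by the packages above):

- name: Load the copied rules into the kernel right away
  systemd:
    name: netfilter-persistent
    state: restarted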

That’s all for this quick blog post on how to efficiently write repetitive, related tasks in an ansible playbook.

Hope it helped.

Till next time. o/

[Note:- I will link this article in the old post as an update.]

by priyankasaggu119 at August 18, 2019 11:16 PM

August 17, 2019

Jason Braganza (Do The Work)

French, Week 15

Some weeks, you feel like you go through the motions.
This was not one of those weeks.

Made a small breakthrough that now helps me remember words faster when I add images to my app.
In addition to the images for the word, I now also add an image that might help me remember how the word actually ‘sounds’.

For example, this is a word I have to learn: Singe.
Singe is monkey, in French.
So here’s a monkey … of sorts.

— image courtesy Max Pixel

The thing with ‘this’ monkey, though, is that he’s singing.
And my brain goes … hmmm … singing monkey … singing … sing … Singe!

Like Trinity says, “That’s a nice trick!” :)

Will do this with all new cards now.
And the old ones that I find hard to remember.


by Mario Jason Braganza at August 17, 2019 12:15 AM

August 15, 2019

Kushal Das

git checkout to previous branch

We regularly move between git branches while working on projects. I always used to type the full branch name, say, to go back to the develop branch and then come back to the feature branch. That takes a lot of typing (for the branch names etc.). I found out that we can use -, just like we use cd - to go back to the previous directory we were in.

git checkout -
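For example (the branch names here are just for illustration):

$ git checkout develop
$ git checkout feature-login
$ git checkout -    # back on develop
$ git checkout -    # back on feature-login again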

Here is a small video for demonstration.

I hope this will be useful for some people.

August 15, 2019 11:26 AM

August 13, 2019

Priyanka Saggu

“Warning” to SSH users!

August 13, 2019

Okay, not to worry. There is no such actual warning. 😀

I am just trying to extend my previous post on securing SSH by adding another one-liner solution to it (actually, it was a one-liner before I decided to implement it with ansible again).

And the solution is:

You can add “warning banners” for incoming nodes that try to establish an SSH connection to your concerned nodes. These banners give a clear view of the guidelines and measures the administrators impose on users to ensure the server’s safety and security.

me 😛
  • Expanding the same ansible playbook we built in the last post, edit the file “/playbook/ssh/tasks/main.yml” to add the following lines in there.
    • The new tasks will do the following:
      • Find the line “#Banner none” in the sshd_config file and replace it with “Banner /etc/issue”.
      • Copy the contents of the “ssh/templates/issue” file to the remote node’s “/etc/issue” file.
      • And finally, restart the sshd daemon again to reflect the changes.
- regexp: "^#?Banner none"
  line: "Banner /etc/issue"

- name: Copy the banner issue file in remote node
  copy:
    src: /etc/ansible/playbooks/ssh/templates/issue
    dest: /etc/issue
    owner: root
    group: root
    mode: 0644

After adding the above lines, the actual “ssh” ansible role will now look like this:

---
# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
          user: "{{username}}"
          state: present
          key: "{{ lookup('file', item) }}"
  with_fileglob: public_keys/*.pub
- name: Harden sshd configuration
  lineinfile:
          dest: /etc/ssh/sshd_config
          regexp: "{{item.regexp}}"
          line: "{{item.line}}"
          state: present
  with_items:
    - regexp: "^#?PermitRootLogin"
      line: "PermitRootLogin no"
    - regexp: "^#?PasswordAuthentication"
      line: "PasswordAuthentication no"
    - regexp: "^#?AllowAgentForwarding"
      line: "AllowAgentForwarding no"
    - regexp: "^#?AllowTcpForwarding"
      line: "AllowTcpForwarding no"
    - regexp: "^#?MaxAuthTries"
      line: "MaxAuthTries 2"
    - regexp: "^#?MaxSessions"
      line: "MaxSessions 2"
    - regexp: "^#?TCPKeepAlive"
      line: "TCPKeepAlive no"
    - regexp: "^#?UseDNS"
      line: "UseDNS no"
    - regexp: "^#?Banner none"
      line: "Banner /etc/issue"

- name: Copy the banner issue file in remote node
  copy:
    src: /etc/ansible/playbooks/ssh/templates/issue
    dest: /etc/issue
    owner: root
    group: root
    mode: 0644

- name: Restart sshd
  systemd:
          state: restarted    
          daemon_reload: yes
          name: sshd
...
  • The contents of “/etc/ansible/playbooks/ssh/templates/issue” can be written like the following example template (this example template is taken from here).
----------------------------------------------------------------------------------------------
You are accessing a XYZ Government (XYZG) Information System (IS) that is provided for authorized use only.
By using this IS (which includes any device attached to this IS), you consent to the following conditions:

+ The XYZG routinely intercepts and monitors communications on this IS for purposes including, but not limited to,
penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM),
law enforcement (LE), and counterintelligence (CI) investigations.

+ At any time, the XYZG may inspect and seize data stored on this IS.

+ Communications using, or data stored on, this IS are not private, are subject to routine monitoring,
interception, and search, and may be disclosed or used for any XYZG authorized purpose.

+ This IS includes security measures (e.g., authentication and access controls) to protect XYZG interests--not
for your personal benefit or privacy.

+ Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching
or monitoring of the content of privileged communications, or work product, related to personal representation
or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work
product are private and confidential. See User Agreement for details.
----------------------------------------------------------------------------------------------

So now, after running the new modified ansible playbook again, anyone who tries to establish an SSH connection to our concerned nodes will be welcomed with a warning banner like this.

(Screenshot: warning banner)

I found this small approach to SSH security very interesting, and thus I am writing it down here.

That’s all for this short post. Hope it helps!

Till next time, o/

by priyankasaggu119 at August 13, 2019 11:24 PM

Bhavin Gandhi

Organizing PythonPune Meetups

One thing I like most about meetups is that you get to meet new people. Talking with people and hearing what they are doing helps a lot in gaining more knowledge. It is also a good platform to make connections with people who have similar areas of interest. I have been attending PythonPune meetups for the last 2 years. In this blog post, I will share some history of this group and how I got involved in organizing meetups.

by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at August 13, 2019 03:23 PM

August 12, 2019

Jason Braganza (Personal)

On Life and its Costs


“However mean your life is, meet it and live it;
do not shun it and call it hard names.
It is not so bad as you are.
It looks poorest when you are richest.
The fault-finder will find faults even in paradise.

Love your life, poor as it is.
You may perhaps have some pleasant, thrilling, glorious hours, even in a poorhouse.
The setting sun is reflected from the windows of the almshouse as brightly as from the rich man's abode;
the snow melts before its door as early in the spring.
I do not see but a quiet mind may live as contentedly there, and have as cheering thoughts, as in a palace.”

― Henry David Thoreau, Walden

and

“The cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.”

― Henry David Thoreau, Walden


by Mario Jason Braganza at August 12, 2019 12:15 AM

August 11, 2019

Abhilash Raj

nonlocal statement in Python

Today, while spelunking through Python's documentation, I discovered a statement which isn't very commonly known or used: nonlocal. Let's see what it is and what it does.

The nonlocal statement is pretty old; it was introduced by PEP 3104 and allows a variable to be re-bound in a scope other than local or global, i.e. the enclosing scope. What does that mean? Consider this Python function:

def function():
    x = 100
    def incr_print(y):
        print(x + y)
    incr_print(100)

Trying to run this function will give you the expected output:

In [5]: function()          
200

In this case, we see that the inner function, incr_print, is able to read the value of x from its outer scope, i.e. function.

Now consider this function instead:

def function():
    x = 100
    def incr(y):
        x = x + y
    incr(100)

It is pretty simple, but when you try to run it, it fails with:

---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-2-30ca0b4348da> in <module>
----> 1 function()

<ipython-input-1-61421989fe16> in function()
      3     def incr(y):
      4         x = x + y
----> 5     incr(100)
      6 

<ipython-input-1-61421989fe16> in incr(y)
      2     x = 100
      3     def incr(y):
----> 4         x = x + y
      5     incr(100)
      6 

UnboundLocalError: local variable 'x' referenced before assignment

So, you can read from a variable in the outer scope, but you can't write to it, because Python won't allow re-binding an object from an outer scope. But there is a way out of this; you must have read about the global scope:

In [6]: z = 100                                                               
In [8]: def function(): 
   ...:     global z 
   ...:     z = z + 100 
   ...:                                                                       
In [9]: function()                                                             
In [10]: z
Out[10]: 200

So, you can actually refer to the global scope using the global statement. To fill the same gap for re-binding to the enclosing scope, the nonlocal statement was added:

def function():
    x = 100
    def incr(y):
        nonlocal x
        x = x + y
    incr(100)
    print(x)

When you run this:

In [13]: function()   
200
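As a quick illustration of where this is handy, here is a sketch of the classic counter closure, which only works because of nonlocal:

def make_counter():
    count = 0
    def increment():
        nonlocal count  # re-bind count in the enclosing scope
        count += 1
        return count
    return increment

counter = make_counter()
counter()  # returns 1
counter()  # returns 2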

You can read more details in the documentation and the PEP 3104 itself.

Thanks to Mario Jason Braganza for proof-reading and pointing out typos in this post.

by Abhilash Raj at August 11, 2019 07:21 PM

August 10, 2019

Jason Braganza (Do The Work)

French, Week 14

This week was a slog.
Kept at it.
Specially after reading this.
Can understand phrases, here and there now.
So, that’s progress. :)


by Mario Jason Braganza at August 10, 2019 09:29 AM

Saptak Sengupta

Making cleaner imports with Webpack and Babel

You can bring in modules from a different JavaScript file using require-based code, or normal Babel-parseable imports. But code with these imports often becomes a little messy because of relative imports like:

import Component from '../../path/to/Component'

But a better, cleaner way of writing ES6 imports is

import Component from '~/path/from/jsRoot/Component.js'

This hugely avoids the bad relative import paths that depend on where the component files are. Now, this is not parseable by Babel itself, but webpack can resolve it using its resolve attribute. So your webpack config should have these two segments of code:

resolve: {
        alias: {
            '~': __dirname + '/path/to/jsRoot',
            modernizr$: path.resolve(__dirname, '.modernizrrc')
        },
        extensions: ['.js', '.jsx'],
        modules: ['node_modules']
    },

and

module: {
        rules: [
            {
                test: /\.jsx?$/,
                use: [
                    {
                        loader: 'babel-loader',
                        query: {
                            presets: [
                                '@babel/preset-react',
                                ['@babel/preset-env', { modules: false }]
                            ],
                        },
                    }
                ],
            },
        ],
}

The {modules: false} ensures that babel-preset-env doesn't handle the parsing of the module imports. You can check the following comment in a webpack issue to know more about this.
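One caveat: tools that don't run your code through webpack (a test runner, for instance) won't know about the ~ alias. If you happen to use Jest, a matching moduleNameMapper entry keeps such imports resolvable there too (a sketch; the jsRoot path is an assumption you'd adjust to your project):

// jest.config.js
module.exports = {
    moduleNameMapper: {
        // map '~/foo' to '<rootDir>/path/to/jsRoot/foo'
        '^~/(.*)$': '<rootDir>/path/to/jsRoot/$1',
    },
};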

by SaptakS (noreply@blogger.com) at August 10, 2019 07:00 AM

August 05, 2019

Kushal Das

Adding directory to path in csh on FreeBSD

While I was trying to install Rust on a FreeBSD box, I figured that I would have to update the path on the system with the directory path of ~/.cargo/bin. I added the following line in the ~/.cshrc file for the same.

set path = ( $path /home/kdas/.cargo/bin)
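To pick up the change in the current shell and verify it, something like this should work (a sketch; the echoed path is illustrative):

% source ~/.cshrc
% echo $path
/sbin /bin /usr/sbin /usr/bin /usr/local/bin /home/kdas/.cargo/bin
% which cargo
/home/kdas/.cargo/bin/cargo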

I am yet to learn much about csh, but I can count this as a start.

August 05, 2019 09:52 AM

Jason Braganza (Personal)

Books I’ve Read, July Edition

Lots of fantasy, a lovely book of poetry, a beautifully written nonfiction book.
All this, on July’s list of books :)

July

  • Love Looks Pretty on You, Lang Leav
    (must read. in my imagination, leav is a talented younger sister, who has been through a lot more and writes her advice just for me, in her poems)

  • Working, Robert Caro
    (if you haven’t read the Power Broker, you should
    if you haven’t read the Lyndon volumes, you should
    this book is Caro’s account of the work, that went into those works.
    the ceaseless toil, the thankless years, the people and their stories
    Caro is Caro, master of the craft.
    There are only a few explicit lessons here.
    but plenty if you care enough to read between the lines
    plenty if you make this an annual read, like i will)

  • The Broken Earth Trilogy, N. K. Jemisin
    (if you love fantasy, this is an absolute read.
    world building at its finest.
    The journey she takes me on! The magic she creates! The world she imagines!
    It’s such a harsh world, but gosh darn it, I want to live there.
    Jemisin’s awesome.)

    • The Fifth Season
    • The Obelisk Gate
    • The Stone Sky
  • The Inheritance Trilogy, N. K. Jemisin
    (This was Jemisin’s older trilogy and it shows.
    The language is rougher and the characters drag on a bit
    Minor quibbles though. It was a really good read)

    • The Hundred Thousand Kingdoms
    • The Broken Kingdoms
    • The Kingdom of Gods

P.S. Subscribe to the mailing list to see what I read every month :)


by Mario Jason Braganza at August 05, 2019 12:15 AM

July 29, 2019

Sayan Chowdhury

Force git to use git:// instead of https://


I'm an advocate of using SSH authentication to connect to services like GitHub, GitLab, and many others. I make sure to use the git:// URL while cloning a repo, but sometimes I make the mistake of using https:// instead, only to realise it later when git prompts me to enter my username to authenticate the HTTPS connection. This is when I have to manually reset my git remote URL.

Today, I found a cleaner solution to this problem. I can use insteadOf to enforce the connection via SSH.

git config --global url."git@github.com:".insteadOf "https://github.com/"

This creates an entry in your .gitconfig:

[url "git@github.com:"]
	insteadOf = https://github.com/
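As an aside, if you ever want only pushes to go over SSH while fetches stay on HTTPS, git also supports pushInsteadOf for exactly that:

git config --global url."git@github.com:".pushInsteadOf "https://github.com/"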

Photo by Yancy Min on Unsplash

by Sayan Chowdhury at July 29, 2019 06:52 AM

Jason Braganza (Work)

How To Say No to Others, Better!

Last week’s post seems to have hit a nerve.
Most of you seem to have opened it rather quickly.
And then a few of you complained! Rather quickly.

“All this is well and good, but I want to say No to other people!”

Well, I can help you with that too!
Eric Barker, of Barking Up the Wrong Tree fame, has an excellent post on how to do just that!

This is how we do it.

1. Notice the no’s: Saying no rarely leads to vendettas or blood feuds. It’s more common and less risky than you think.
People say no to requests all the time and suffer no ill consequences. The sea doesn’t turn to blood and frogs don’t fall from the sky. The requester just shrugs and says, “Okay.”
But you forget those all too easily and train your attention on the 0.02% of the time when the other person blew up and stormed away, never to speak to you again.
So watch your interactions and the interactions of others more closely. Notice all the times “no” doesn’t cause any problems and try to develop a more realistic perspective.


2. Buy time: I’m not sure I can summarize this one right now. I’ll get back to you later.
When you feel pressured for a yes, don’t give the yes — relieve the pressure. Ask for time. This will allow you to calm down and properly evaluate whether you really want to agree or not.
Memorize two of these phrases and make them your default response to any request:

  • “I need to check my calendar; I’ll get back to you.”
  • “Let me check with my husband/wife/partner to see if we’re free that day.”
  • “I’ve got to think about that; I’ll let you know.”
  • “I’ll have to call you back in a few minutes.”

Don’t turn them into questions. They’re statements. And use a pleasant but assertive tone.


3. Have a “policy”: Sorry, but it’s my policy to never summarize the third point.
… suppose a friend asks for a loan you don’t want to extend. Utter the phrase “Sorry, I have a policy about not lending money,” and your refusal immediately sounds less personal. In all kinds of situations, invoking a policy adds weight and seriousness when you need to say no. It implies that you’ve given the matter considerable thought on a previous occasion and learned from experience that what the person is requesting is unwise. It can also convey that you’ve got a prior commitment you can’t break. When you turn down an invitation by saying, “Sorry, I can’t come—it’s our policy to have dinner together as a family every Friday night,” it lets the other person know that your family ritual is carved in stone.


4. Be a “broken record”: I can’t summarize this. I can’t summarize this. I can’t summarize this.
How do you deal with people who don’t take no for an answer?
First thing to do is say you can’t help them.
The second through seven-hundredth thing to do is repeat the first thing.


5. Use a “relational account”: If I summarized this for you I wouldn’t have time to summarize for others.
Your response should take the structure of: “If I helped you, I’d be letting others down.”


6. Make a counteroffer: I can’t summarize this but I can link you to another blog that will.
What if you don’t want to give a flat no? You want to help but can’t commit to the specifics of what they’re asking for. Here’s what to do …
They want you to donate $487,000. Um, no way. But I can give you $10 …
“I’m not qualified to do what you’re asking, but here’s something else.”
“This isn’t in my wheelhouse, but I know someone who might be helpful.”
You can make a counteroffer to almost any request by offering someone a different resource or the name of someone else who might help.

Like my summary of Eric’s summary?
You should go read his post. It has the why, and the how and tons of examples and references!

P.S. And if you’re reading this on my blog, you should subscribe to the newsletter!


by Mario Jason Braganza at July 29, 2019 12:15 AM

July 26, 2019

Robin Schubert

Internet Connection

It's hard to believe that it's been only two years, since so many things in my life changed drastically. I have never before had the feeling of learning such an amount of new things. I found friends that I admire and whose company I enjoy.

I am not connected to the internet - I am connected to people, via the internet

Or as John Perry Barlow stated in the Declaration of the Independence of Cyberspace:

Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

This applies with all its positive and negative aspects. However much I wish I could sometimes come over physically, to have a beer and chat or to not talk at all, I am grateful for our new home of minds, where we can share our thoughts free from physical constraints.

We have never met in person - That does not make the meeting less personal

Thank you, #dgplug

P.S.: Looking forward to learning more, sharing more thoughts, and nevertheless meeting in person, eventually ;-)

by Robin Schubert at July 26, 2019 12:00 AM

July 22, 2019

Shakthi Kannan

Aerospike Wireshark Lua plugin workshop, Rootconf 2019, Bengaluru

Rootconf 2019 was held on June 21-22, 2019 at NIMHANS Convention Centre, in Bengaluru on topics ranging from infrastructure security, site reliability engineering, DevOps and distributed systems.

Rootconf 2019 Day 1

Day I

I had proposed a workshop titled “Shooting the trouble down to the Wireshark Lua Plugin” for the event, and it was selected. I have been working on the “Aerospike Wireshark Lua plugin” for dissecting Aerospike protocols, and hence I wanted to share the insights on the same. The plugin source code is released under the AGPLv3 license.

“Wireshark” is a popular Free/Libre and Open Source Software protocol analyzer for troubleshooting networks and analyzing protocols. The “Lua programming language” is useful for extending C projects with scripting support for developers. Since Wireshark is written in C, its plugin extension is provided by Lua. Aerospike uses the PAXOS family and custom-built protocols for distributed database operations, and the plugin has been quite useful for packet dissection and solving customer issues.

Rootconf 2019 Day 1

The workshop had both theory and lab exercises. I began with an overview of Lua, Wireshark GUI, and the essential Wireshark Lua interfaces. The Aerospike Info protocol was chosen and exercises were given to dissect the version, type and size fields. I finished the session with real-world examples, future work and references. Around 50 participants attended the workshop, and those who had laptops were able to work on the exercises. The workshop presentation and lab exercises are available in the aerospike-wireshark-plugin/docs/workshop GitHub repository.
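For readers who could not attend, the skeleton of such a dissector is quite small. The sketch below is illustrative rather than the actual plugin code: the field offsets follow the 8-byte Info header (1-byte version, 1-byte type, 6-byte size) covered in the exercises, and the TCP port is an assumption:

-- minimal Wireshark Lua dissector sketch for the Aerospike Info header
local p_info = Proto("aerospike_info_demo", "Aerospike Info (demo)")

local f_version = ProtoField.uint8("aerospike_info_demo.version", "Version", base.DEC)
local f_type    = ProtoField.uint8("aerospike_info_demo.type", "Type", base.DEC)
local f_size    = ProtoField.uint64("aerospike_info_demo.size", "Size", base.DEC)
p_info.fields = { f_version, f_type, f_size }

function p_info.dissector(buffer, pinfo, tree)
    pinfo.cols.protocol = p_info.name
    local subtree = tree:add(p_info, buffer(), "Aerospike Info")
    subtree:add(f_version, buffer(0, 1))  -- protocol version
    subtree:add(f_type, buffer(1, 1))     -- message type
    subtree:add(f_size, buffer(2, 6))     -- payload size (6 bytes)
end

-- attach to the (assumed) Aerospike service port
DissectorTable.get("tcp.port"):add(3003, p_info)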

I had follow-up discussions with the participants before moving to the main auditorium. “Using pod security policies to harden your Kubernetes cluster” by Suraj Deshmukh was an interesting talk on the level of security that should be employed with containers. After lunch, I started my role as emcee in the main auditorium.

The keynote of the day was by Bernd Erk, the CEO at Netways GmbH, who is also the co-founder of the Icinga project. He gave an excellent talk on “How convenience is killing open standards”. He gave numerous examples on how people are not aware of open standards, and take proprietary systems for granted. This was followed by flash talks from the audience. Jaskaran Narula then spoke on “Securing infrastructure with OpenScap: the automation way”, and also shared a demo of the same.

After the tea break, Shubham Mittal gave a talk on “OSINT for Proactive Defense” in which he shared the Open Source Intelligence (OSINT) tools, techniques and procedures to protect the perimeter security for an organization. The last talk of the day was by Shadab Siddiqui on “Running a successful bug bounty programme in your organization”.

Day II

Anant Shrivastava started the day’s proceedings with a recap on the talks from day one.

The first talk of the day was by Jiten Vaidya, co-founder and CEO at Planetscale who spoke on “OLTP or OLAP: why not both?”. He gave an architectural overview of vitess.io, a Free/Libre and Open Source sharding middleware for running OLTP workloads. The design looked like they were implementing the Kubernetes features on a MySQL cluster. Ratnadeep Debnath then spoke on “Scaling MySQL beyond limits with ProxySQL”.

After the morning break, Brian McKenna gave an excellent talk on “Functional programming and Nix for reproducible, immutable infrastructure”. I have listened to his talks at the Functional Programming conference in Bengaluru, and they have been using Nix in production. The language constructs and cases were well demonstrated with examples. This was followed by yet another excellent talk by Piyush Verma on “Software/Site Reliability of Distributed Systems”. He took a very simple request-response example, and incorporated site reliability features, and showed how complex things are today. All the major issues, pitfalls, and troubles were clearly explained with beautiful illustrations.

Aaditya Talwai presented his talk on “Virtuous Cycles: Enabling SRE via automated feedback loops” after the lunch break. This was followed by Vivek Sridhar’s talk on “Virtual nodes to auto-scale applications on Kubernetes”. Microsoft has been investing heavily in Free/Libre and Open Source, and has been hiring a lot of Python developers as well. Satya Nadella has been bringing in a lot of changes, and it will be interesting to see their long-term progress. After Vivek’s talk, we had a few slots for flash talks from the audience, and then Deepak Goyal gave his talk on “Kafka streams at scale”.

After the evening beverage break, Øystein Grøvlen, gave an excellent talk on PolarDB - A database architecture for the cloud. They are using it with Alibaba in China to handle petabytes of data. The computing layer and shared storage layers are distinct, and they use RDMA protocol for cluster communication. They still use a single master and multiple read-only replicas. They are exploring parallel query execution for improving performance of analytical queries.

Rootconf 2019 Day 2

Overall, the talks and presentations were very good for 2019. Time management is of utmost importance at Rootconf, and we have been very consistent. I was happy to emcee again for Rootconf!

July 22, 2019 03:00 PM

July 21, 2019

Rahul Jha

The [deceptive] power of visual explanation

Quite recently, I came across Jay Alammar’s rather beautiful blog post, “A Visual Intro to NumPy & Data Representation”.

Before reading this, whenever I had to think about an array:


In [1]: import numpy as np

In [2]: data = np.array([1, 2, 3])

In [3]: data
Out[3]: array([1, 2, 3])

I used to create a mental picture somewhat like this:


       ┌────┬────┬────┐
data = │  1 │  2 │  3 │
       └────┴────┴────┘

But Jay, on the other hand, uses a vertical stack for representing the same array.

Image from Jay's blog post.

At first glance, and owing to the beautiful graphics Jay has created, it makes perfect sense.

Now, if you had only seen this image, and I ask you the dimensions of data, what would your answer be?

The mathematician inside you barks (3, 1).

But, to my surprise, this wasn’t the answer:


In [4]: data.shape
Out[4]: (3,)

(3, ), eh? Wondering what a (3, 1) array would look like?


In [5]: data.reshape((3, 1))
Out[5]:
array([[1],
       [2],
       [3]])

Hmm. This raises the question: what is the difference between an array of shape (R, ) and one of shape (R, 1)? A little bit of research landed me at this answer on StackOverflow. Let’s see:

The best way to think about NumPy arrays is that they consist of two parts, a data buffer which is just a block of raw elements, and a view which describes how to interpret the data buffer.

For example, if we create an array of 12 integers:


>>> a = numpy.arange(12)
>>> a
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

Then a consists of a data buffer, arranged something like this:


 ┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
 │  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
 └────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

and a view which describes how to interpret the data:


    >>> a.flags
      C_CONTIGUOUS : True
      F_CONTIGUOUS : True
      OWNDATA : True
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCOPY : False
    >>> a.dtype
    dtype('int64')
    >>> a.itemsize
    8
    >>> a.strides
    (8,)
    >>> a.shape
    (12,)

Here the shape (12,) means the array is indexed by a single index which runs from 0 to 11. Conceptually, if we label this single index i, the array a looks like this:

i= 0    1    2    3    4    5    6    7    8    9   10   11
  ┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
  │  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
  └────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

If we reshape an array, this doesn’t change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after:

>>> b = a.reshape((3, 4))

the array b has the same data buffer as a, but now it is indexed by two indices which run from 0 to 2 and 0 to 3 respectively. If we label the two indices i and j, the array b looks like this:


i= 0    0    0    0    1    1    1    1    2    2    2    2
j= 0    1    2    3    0    1    2    3    0    1    2    3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

So, if we were to actually have a (3, 1) matrix, it would have the exact same stack representation as a (3, ) matrix, thus creating the confusion.
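A quick way to convince yourself that the two shapes really are different objects, despite the identical picture, is to let broadcasting act on them:

In [6]: a = np.array([1, 2, 3])        # shape (3,)

In [7]: b = a.reshape((3, 1))          # shape (3, 1)

In [8]: (a + b).shape                  # broadcast: (3,) x (3, 1) -> (3, 3)
Out[8]: (3, 3)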

So, what about the horizontal representation?

An argument can be made that the horizontal representation can be misinterpreted as a (1, 3) matrix, but our brains are so accustomed to seeing it as a 1-D array that this is almost never the case (at least with folks who have worked with Python before).

Of course, it all makes perfect sense now, but it did take me a while to figure out what exactly was going under the hood here.


Visual Explanation of Fourier Series - Decomposition of a square wave into a sum of infinite sinusoids. From this answer on math.stackexchange.com

I also realized that while it is hugely helpful to visualize something when learning about it, one should always take the visual representation with a grain of salt. As we can see, they are not entirely accurate.

For now, I’m sticking to my prior way of picturing a 1-D array as a horizontal list to avoid the confusion. I shall update the blog if I find anything otherwise.

My point is not that Jay’s drawings are flawed, but how susceptible we are to visual deceptions. In this case, it was relatively easier to figure out, because it was code, which forces one to pay attention to each and every detail, however minor it may be.

After all, the human brain, prone to so many biases and taking shortcuts for nearly every decision we make (thus leaving room for sanity), isn’t anywhere near as perfect as it thinks it is.

July 21, 2019 06:30 PM

July 18, 2019

Rahul Jha

My Experience with OBM

If you want an overview of OBM, please read my post on the same.

I’ve participated in three sprints until now, in which I’ve completely failed myself, but I’m already experiencing drastic changes in my habits, which is good.

Here is what I’ve learned from this short, but significant experience:

First and foremost, the structure of OBM forces you to formalize things. You need to set up goals for yourself. Even better, the setup makes it very difficult to be vague. You have to set up the smaller tasks you need to achieve in order to complete the goal. The research required for listing these tasks (thus providing you with a big picture), and getting a correct estimate of the time required, helps you plan efficiently.

The next thing is priority: what you decide to do now. I tend to perform better if I have only 3 things on my TODO list, rather than 10. And OBM accommodates that: send a list of all the tasks you want to work on for the next 15 days, and then spend time doing them, rather than managing your list.

The difficult thing about writing that blog post you’d been thinking about for a week, or deciphering the math equation which just popped out of nowhere in that paper, often isn’t actually the writing, or performing the analysis. It’s taking out dedicated time from your time-poor schedule just for this. Once you get started, it’s way easier.

A nice analogy to this argument: the hardest part of going to the gym is actually, physically going to the gym. Once you’re there, all geared up and warmed up, exercise is much more fun. Getting over this initialization barrier is a must. The way I manage this is by having slots in my schedule named “OBM”, where I only complete the tasks I’ve listed for OBM. No, you aren’t allowed to browse through Twitter during that time; just start grinding, and you shall reap the produce afterwards.

One other important behavior I’ve observed is the misalignment between what I believe I’m interested in and how much I can afford to work towards it. If I repeatedly find myself not engaging with a task, I know it’s not made for me, and I quit early (thus saving my time and resources from going further down the drain; more about this in “The Dip” by Seth Godin).

Conclusion

OBM serves as a great tool for introspection, monitoring one’s progress, and getting things done. As side ‘effects’, it also gives you a taste of professionalism, punctuality and reporting relationships: a complete package aimed at self-improvement. \o/

But if you’re overwhelmed by the notion of public accountability just yet, I’d recommend running your own personal OBM and seeing the difference. If you want any more advice, feel free to contact me (RJ722 on #dgplug, freenode).

July 18, 2019 06:30 PM

July 13, 2019

Abhilash Raj

Security Implications of User Namespaces

The goal of this blog post is to understand and document the security implications of user namespaces. Primarily, it compares two processes running on a Linux system as different unprivileged users: one in the root user namespace, and one in an unprivileged user namespace (i.e. a user namespace where uid 0 corresponds to an unprivileged user on the host).

What is user namespace?

I am not going to go into much depth to introduce user namespaces. There is a ton of excellent documentation around them. Here are some of the notable benefits of user namespaces that I can think of:

  1. Allow running users as root with UID 0 but with no real privilege to affect the host system or neighboring containers.
  2. Prevent exposure of the actual root user to the container

Since we are only talking about security benefits, we won’t go into details about other benefits yet, but there are more reasons than this to use user namespaces. For example, you can create your container images with the assumption that they will always run as a specific user, say UID 1000, and then that can be remapped to something else at runtime. The support for filesystems to shift file ownerships is currently limited (read: non-existent), so you will end up having to chown the files to shift their UID/GID based on the range mapping you choose. There are solutions like shiftfs that are currently being evaluated by the community, but I will write another blog post on this topic.

OK, I am back on the security issue.

What security implications does user namespaces have?

Let’s dive deeper into the two points that I mentioned in the previous section. But before that, let’s go back a bit in history to understand where the need for user namespaces stems from.

History

A long, long time ago, there was one single ruler in the land of the Linux kernel: root. This ruler could do anything and everything, which wasn’t really a big concern, except for the fact that there wasn’t a way to give someone access to a limited set of powers without making them the all-powerful root. Hence, the powers were later broken down into a bunch of capabilities (except the one powerful CAP_SYS_ADMIN, which is still the ruler of Linux). These capabilities were meant both to strip powers from root and to provide smaller powers to regular citizens (i.e. unprivileged processes) for whatever valid use cases they might have (for example, apache or nginx doesn’t need to run as root to bind to a privileged port like 80 or 443; it should just need CAP_NET_BIND_SERVICE). This worked fine for most people for some time, until containers showed up. There was a need to perform privileged operations inside a container which would only affect the ecosystem of the container and nothing outside. So, a new namespace was created that could allow unprivileged processes to pretend to be root inside the confines of their own namespace. This particular namespace was meant to create namespaced (read: fake) users and capabilities.

Namespaced user and capabilities

What does that even mean? It means that UID 99 in the namespace could actually be UID 5000000 on the host. But the biggest implication is that you can pretend to have even UID 0 inside that namespace while actually being UID 5000000 on the host. This is where several interesting possibilities open up.

Similarly, a user with UID 1000 on the host can have the CAP_SYS_ADMIN capability inside the user namespace and will be able to do things like run the mount command to mount certain types of filesystems that they otherwise wouldn’t have been able to. Note that this still doesn’t provide them with the full set of capabilities to affect the host filesystem.
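A quick way to see this from a shell is util-linux’s unshare; here uid 1000 is just an example of an unprivileged host user:

$ id -u
1000
$ unshare --user --map-root-user sh -c 'whoami; cat /proc/self/uid_map'
root
         0       1000          1

Inside the namespace the process is uid 0 with a full (namespaced) capability set, while the uid_map shows it is still uid 1000 on the host.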

Threat models

Consider an application running inside a container, as an unprivileged user, with all capabilities dropped and noNewPrivileges enabled, so that its bounding set doesn’t include anything either. Now, there are two possible situations for this container,

  1. one with user namespaces enabled, such that the root user inside the container is mapped to an unprivileged user on the host,
  2. and second, with user namespaces disabled, so that the root user inside the container is mapped to the root user on the host.


The threat model that we are working with is an attacker who is able to compromise an application and become root. That is assumed to be possible, because we don’t want to spend time bikeshedding all the different ways to escalate privileges through application-level vulnerabilities. Now, let’s compare the possible attack vectors.

Attack Vectors: No user-namespaces

Without user namespaces, when a user escalates to root, they are the real root, albeit with no capabilities. This user has access to the filesystem and hence could at least read files owned by the root user, but they are in a mount namespace. They could read kernel memory or the memory of processes running as root on the host, if only they weren’t in the PID namespace.

If you look at the patterns here, there is always a single line of defense preventing this user from gaining control of the system. If history has taught us anything, it is that Linux has bugs. Any vulnerability that allows this attacker to escape out of any one of the 7 namespaces can allow potentially bad things to happen.

  • Escaping the mount namespace could allow them to mount rogue filesystems or read potentially root-owned files
  • Escaping the PID namespace would allow them to read the memory of other processes running with root privileges
  • Escaping the network namespace would allow them to control firewall rules and open potential security holes. They can also control the packets.
  • Escaping the UTS namespace: I can’t think of anything too bad on top of the network namespace
  • Escaping the cgroup namespace would allow them to perform a DoS on the system
  • Escaping the IPC namespace could allow them to send kill signals to critical processes on the host

Finally, it is also possible for an attacker to regain certain privileges that were dropped. Simply being able to create a user namespace adds the CAP_SYS_ADMIN capability (the first process in every user namespace has CAP_SYS_ADMIN), albeit with limited functionality enabled. Now we are in very weird territory regarding what precise privileges the attacker has. There have been some serious attacks that can be mounted with this limited functionality, too.

Attack Vectors: User-namespaces

With user namespaces, configured in a way that the root user from within the namespace is mapped to an unprivileged user on the host, when the attacker escalates to root, they are still essentially an unprivileged user. So even if they escape the container, with all the attack vectors mentioned in the previous section, they still can’t do anything privileged. For them to mount an attack on the host, they still need to find another privilege-escalation vulnerability to escalate to root on the host. Simply put, user namespaces add a strong second layer of security on top, which can be exploited only through bugs in the kernel, not the application.

The stinky bit

We talked about the benefits of user namespaces, but now let’s talk a bit about what’s wrong with them. The way user namespaces are currently implemented in Linux (this is mostly anecdotal; I am not a kernel developer, nor have I read the source code related to user namespaces), users are granted extra capabilities inside a user namespace that allow them to do things they aren’t allowed to do on the host.

This opens up a whole lot of code paths that were previously inaccessible to unprivileged users. For example, previously, only the root user could create namespaces. However, an exception was added for the user namespace, so now any unprivileged user can create just a user namespace. This elevates their privilege inside that namespace, which allows them to create any other namespace. This is how a rootless container works. This 2-step process has been simplified into a 1-step process, so when CLONE_NEWUSER is included in a clone(2) or unshare(2) call, you can also specify CLONE_NEW* for any other namespace.
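That 1-step combination is easy to demonstrate from a shell as an unprivileged user (a sketch; output abridged):

$ unshare --user --map-root-user --net sh -c 'id -u; ip link'
0
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN ...

The process is "root" inside the new user namespace and owns a brand-new network namespace containing only a loopback device, all without any privileges on the host.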

Conclusion

The bugs are being ironed out as they are found, which is happening rather quickly due to the high demand for container images throughout the industry; people would like them to be secure as well as functional. But whether user namespaces prevent more attacks than they enable is a difficult question to answer right now, because there are some very serious pros and cons.

However, all vulnerabilities aren’t created equal, and I believe the attacks user namespaces prevent, by containing vulnerable software that allows privilege escalation, are of more value than the fear of a zero-day against user namespaces themselves.

by Abhilash Raj at July 13, 2019 10:09 AM

July 08, 2019

Robin Schubert

Book review: Shakthi Kannan - I want to do project. Tell me wat to do.

I want to do project.

I remember when I graduated from the University: I knew some Physics, how to do some coding, and how to write a thesis. What I didn't know was how to collaborate and work in a team, or how to address my issues in mailing lists and IRC channels appropriately. Actually, I didn't even know that I didn't know.

I've written about my experience with the #dgplug summertraining earlier. Very slowly I started to discover that I was lacking the very basics of communication, coding style and organization. I knew some programming, but I did not have the means to let anyone else but me benefit from that.

Shakthi Kannan, the author of this book, is one of the mentors in the dgplug summertraining and I owe him much for the valuable lessons learnt.

Rules & Tools to guide you

This book will not teach you coding, but it can tell you how to code. It teaches the rules and tools you need to know to contribute to Free and Open Source Software and to become part of a worldwide community whose efforts power most of today's internet and devices.

It is full of habits and style guides to adopt, and manners and methods to internalize, which you need to know to start off with valuable contributions and to avoid frustration early on.

The pure basics

While the content of this book is something that most graduated students are not aware of, these are the pure basics which you need to know and follow for FOSS contribution. If you ever wrote an email to a mailing list or posted a question to an IRC channel and wondered why you received unhelpful responses or none at all, you probably know what this is about.

It contains many real-world examples of miserable communication or disadvantageous code formatting that FOSS developers don't have the time to deal with every day. You will have a much better experience if you know the pitfalls to look out for beforehand.

There are chapters that walk you through the steps of triaging a bug, fixing it and making the fix available to be merged; the standard workflows for FOSS contribution.

If you think you want to contribute but you don't know where to start: Read this book!

by Robin Schubert at July 08, 2019 12:00 AM

June 20, 2019

Bhavin Gandhi

infracloud.io: Introducing : Tracing Cassandra with Jaeger

This blog post is about a plugin for Cassandra which I wrote a few days back. It covers basic information about the three pillars of observability, which are logging, metrics and tracing. Thanks to Sameer, who helped me with my doubts related to Java and Maven. The blog post was published at infracloud.io on 19th June, 2019. Introducing : Tracing Cassandra with Jaeger

by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at June 20, 2019 04:07 PM

June 02, 2019

Shakthi Kannan

Building Erlang/OTP sources with Ansible

[Published in Open Source For You (OSFY) magazine, September 2017 edition.]

Introduction

Erlang is a programming language designed by Ericsson primarily for soft real-time systems. The Open Telecom Platform (OTP) consists of libraries, applications and tools to be used with Erlang to implement services that require high availability. In this article, we will create a test Virtual Machine (VM) to compile, build, and test Erlang/OTP from its source code. This allows you to create VMs with different Erlang release versions for testing.

The Erlang programming language was developed by Joe Armstrong, Robert Virding and Mike Williams in 1986 and released as free and open source software in 1998. It was initially designed to work with telecom switches, but is widely used today in large scale, distributed systems. Erlang is a concurrent and functional programming language, and is released under the Apache License 2.0.

Setup

A CentOS 6.8 Virtual Machine (VM) running on KVM will be used for the installation. Internet access should be available from the guest machine. The VM should have at least 2 GB of RAM allotted to build the Erlang/OTP documentation. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.3.0.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/erlang.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

erlang ansible_host=192.168.122.150 ansible_connection=ssh ansible_user=bravo ansible_password=password

An entry for the erlang host is also added to the /etc/hosts file as indicated below:

192.168.122.150 erlang

A ‘bravo’ user account is created on the test VM, and is added to the ‘wheel’ group. The /etc/sudoers file also has the following line uncommented, so that the ‘bravo’ user will be able to execute sudo commands:

## Allows people in group wheel to run all commands
%wheel	ALL=(ALL)	ALL

We can obtain the Erlang/OTP sources from a stable tarball, or clone the Git repository. The steps involved in both these cases are discussed below:

Building from the source tarball

The Erlang/OTP stable releases are available at http://www.erlang.org/downloads. The build process is divided into many steps, and we shall go through each one of them. The version of Erlang/OTP can be passed as an argument to the playbook. Its default value is the release 19.0, and is defined in the variable section of the playbook as shown below:

vars:
  ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
  ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
  ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
  TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

The ERL_DIR variable represents the directory where the tarball will be downloaded, and the ERL_TOP variable refers to the top-level directory location containing the source code. The path to the test directory from where the tests will be invoked is given by the TEST_SERVER_DIR variable.

Erlang/OTP has mandatory and optional package dependencies. Let’s first update the software package repository, and then install the required dependencies as indicated below:

tasks:
  - name: Update the software package repository
    become: true
    yum:
      name: '*'
      update_cache: yes

  - name: Install dependencies
    become: true
    package:
      name: "{{ item }}"
      state: latest
    with_items:
      - wget
      - make
      - gcc
      - perl
      - m4
      - ncurses-devel
      - sed
      - libxslt
      - fop

The Erlang/OTP sources are written using the ‘C’ programming language. The GNU C Compiler (GCC) and GNU Make are used to compile the source code. The ‘libxslt’ and ‘fop’ packages are required to generate the documentation. The build directory is then created, the source tarball is downloaded and it is extracted to the directory mentioned in ERL_DIR.

- name: Create destination directory
  file: path="{{ ERL_DIR }}" state=directory

- name: Download and extract Erlang source tarball
  unarchive:
    src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
    dest: "{{ ERL_DIR }}"
    remote_src: yes

The ‘configure’ script is available in the sources, and it is used to generate the Makefile based on the installed software. The ‘make’ command will build the binaries from the source code.

- name: Build the project
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - ./configure
    - make
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

After the ‘make’ command finishes, the ‘bin’ folder in the top-level sources directory will contain the Erlang ‘erl’ interpreter. The Makefile also has targets to run tests to verify the built binaries. We are remotely invoking the test execution from Ansible, and hence -noshell -noinput are passed as arguments to the Erlang interpreter, as shown in the .yaml file.

- name: Prepare tests
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make release_tests
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

- name: Execute tests
  shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

You need to verify that the tests have passed successfully by checking the $ERL_TOP/release/tests/test_server/index.html page in a browser. A screenshot of the test results is shown in Figure 1:

Erlang test results

The built executables and libraries can then be installed on the system using the make install command. By default, the install directory is /usr/local.

- name: Install
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make install
  become: true
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

The documentation can also be generated and installed as shown below:

- name: Make docs
  shell: "cd {{ ERL_TOP }} && make docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"
    FOP_HOME: "{{ ERL_TOP }}/fop"
    FOP_OPTS: "-Xmx2048m"

- name: Install docs
  become: true
  shell: "cd {{ ERL_TOP }} && make install-docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

The total available RAM (2 GB) is specified in the FOP_OPTS environment variable. The complete playbook to download, compile, execute the tests, and also generate the documentation is given below:

---
- name: Setup Erlang build
  hosts: erlang
  gather_facts: true
  tags: [release]

  vars:
    ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Download and extract Erlang source tarball
      unarchive:
        src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
        dest: "{{ ERL_DIR }}"
        remote_src: yes

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Prepare tests
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make release_tests
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Execute tests
      shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

    - name: Install
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make install
      become: true
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Make docs
      shell: "cd {{ ERL_TOP }} && make docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"
        FOP_HOME: "{{ ERL_TOP }}/fop"
        FOP_OPTS: "-Xmx2048m"

    - name: Install docs
      become: true
      shell: "cd {{ ERL_TOP }} && make install-docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml -e "version=19.0" --tags "release" -K

Build from Git repository

We can build the Erlang/OTP sources from the Git repository. The complete playbook is given below for reference:

- name: Setup Erlang Git build
  hosts: erlang
  gather_facts: true
  tags: [git]

  vars:
    GIT_VERSION: "otp"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ GIT_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop
        - git
        - autoconf

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Clone the repository
      git:
        repo: "https://github.com/erlang/otp.git"
        dest: "{{ ERL_DIR }}/otp"

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./otp_build autoconf
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The ‘git’ and ‘autoconf’ software packages are required for downloading and building the sources from the Git repository. The Ansible Git module is used to clone the remote repository. The source directory provides an otp_build script to create the configure script. You can invoke the above playbook as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml --tags "git" -K

You are encouraged to read the complete installation documentation at: https://github.com/erlang/otp/blob/master/HOWTO/INSTALL.md.

June 02, 2019 07:45 PM

May 28, 2019

Saptak Sengupta

What's a ShadowDOM?

I first came to know about Shadow DOM from my curiosity about how browsers implement tags like <video> with all the controls, or <input> which changes based on the type attribute. We can never see any HTML code for these implementations, and yet they are rendered. How? That is when I stumbled upon this blog by Dimitri Glazkov, which beautifully explains the concept of shadow DOM encapsulation used by browsers to implement such tags.

However, none of the browsers allowed developers to write their own custom shadow DOM (though Google Chrome had a v0 version implemented). I stumbled upon shadow DOM again while looking at an issue in jQuery to fix. Since 2018, most browsers have started supporting the shadow DOM APIs, and hence jQuery needed to implement support for them too.
https://caniuse.com/#feat=shadowdomv1
So, what is this shadow DOM and why do we even use it?

What's a DOM?

The W3C specification describes shadow DOM as "a method of combining multiple DOM trees into one hierarchy and how these trees interact with each other within a document, thus enabling better composition of the DOM".

Now, to understand that, we need to understand what a DOM is. DOM or Document Object Model is a tree-like structure containing the different elements (or tags) and strings of text that are shown by the markup language (like HTML, XML, etc.).

So, let's say we have some HTML code like this:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Title</title>
</head>
<body>
<div>
<h1>This is header</h1>
<p>This is a
<a href="https://www.saptaks.blog">
link
</a>
</p>
</div>
</body>
</html>

So, visually you can show the DOM structure as something like:

html
├── head
│   ├── meta (charset="utf-8")
│   └── title ("Title")
└── body
    └── div
        ├── h1 ("This is header")
        └── p ("This is a ...")
            └── a ("link", href="https://www.saptaks.blog")

Shadow DOM

Now, shadow DOM allows us to create separate hidden DOM trees that are attached to elements of a regular DOM tree. This gives you functional encapsulation: someone parsing the regular DOM tree and applying styling to it doesn't know about, or affect, the properties and functionality of the shadow DOM tree. Hence you can use the shadow DOM without knowing the intricate details of how it is implemented. This is important, because it follows the basic ideas of object-oriented programming.

The shadow DOM tree starts with a shadow root and can then have any regular DOM element attached underneath it.

Let's see an example:

HTML
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Title</title>
</head>
<body>
<div id="shadowHost">
</div>
</body>
</html>

JavaScript
const shadowHost = document.getElementById('shadowHost');
const shadowRoot = shadowHost.attachShadow({mode: 'open'});
shadowRoot.innerHTML = '<h1>Hello Shadow DOM</h1>';

So, this will create a shadow DOM. Visually you can represent this as:

div#shadowHost (shadow host)
└── #shadow-root (shadow root, start of the shadow tree)
    └── h1 ("Hello Shadow DOM")

So, as you can see, there are a few different parts in a shadow DOM, apart from it being just another DOM.
  • Shadow tree: The DOM tree inside the shadow DOM.
  • Shadow boundary: the place where the shadow DOM ends, and the regular DOM begins.
  • Shadow root: The root node of the shadow tree.
  • Shadow host: The regular DOM node that the shadow DOM is attached to.
  • Shadow child: The tree below the shadow root node.
The shadow DOM cannot be attached to a few elements, as mentioned in the spec. Some of the reasons are:
  • Form tags such as <input>, <textarea>, etc., or any other HTML tag for which the browser implements its own shadow DOM
  • Elements like <img>, <br> or <hr>, which are usually self-closing tags and don't usually contain a child node.
Also, if you look at the code, there is a "{mode: 'open'}" option. The mode determines whether the shadow DOM is accessible from outside: with 'open', outside JavaScript can reach (and modify) the shadow tree through the host element's shadowRoot property; with 'closed', that property returns null.
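
A short sketch of the difference (the element IDs here are hypothetical):

// 'open': the shadow root stays reachable from outside scripts
const openHost = document.getElementById('openHost');
const openRoot = openHost.attachShadow({mode: 'open'});
console.log(openHost.shadowRoot === openRoot); // true

// 'closed': element.shadowRoot returns null, so outside code
// cannot reach into the shadow tree through the host element
const closedHost = document.getElementById('closedHost');
const closedRoot = closedHost.attachShadow({mode: 'closed'});
console.log(closedHost.shadowRoot); // null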

Why do we need Shadow DOM anyways?

There are a few different scenarios where you might want to use shadow DOM. The most important functionality of shadow DOM is the implementation of the concept of encapsulation: basically, someone using the shadow DOM host doesn't need to care about the style and implementation of the DOM inside. A few use-cases would be the following:
  • The browser implements shadow DOM for various tags such as <video>, <audio>, <input>, <select>, etc.
  • You can make your own custom shadow DOM when you need to create an element that you don't want to be modified by the styling of the remaining DOM.
  • You can also use shadow DOM when you want to separate a particular DOM element from the main DOM tree.
So is Shadow DOM very shadow-ey? Well maybe, since it stays hidden from the main DOM traversal. But at the same time it is often very useful because of its encapsulation properties.

by SaptakS (noreply@blogger.com) at May 28, 2019 01:55 PM

April 08, 2019

Subho

Increasing Postgres column name length

This blog is more like a bookmark for me; the solution was scavenged from the internet. Recently I have been working on an analytics project where I had to generate pivot transpose tables from the data. This is the first time I faced the limits set by the Postgres database. Since it's a pivot, one of my columns would be transposed and used as column names, and this is where things started breaking. Writing to Postgres failed with an error stating that column names were not unique. After some digging I realized Postgres has a column name limit of 63 bytes and anything longer is truncated; hence, post truncation, multiple keys became identical, causing the issue.

The next step was to look at the data in my column; it ranged from 20 to 300 characters long. I checked with Redshift and BigQuery; they had similar limits too, 128 bytes. After looking around for some time I found a solution: download the Postgres source, change NAMEDATALEN to 301 (remember, the column name length is always NAMEDATALEN - 1) in src/include/pg_config_manual.h, then follow the steps from the Postgres docs to compile the source, install, and run Postgres. This has been tested on Postgres 9.6 as of now and it works.
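
The rebuild itself follows the short-version install steps from the Postgres docs; roughly (a sketch, assuming the header edit has already been made):

# in the extracted Postgres 9.6 source tree, after setting
#   #define NAMEDATALEN 301
# in src/include/pg_config_manual.h
$ ./configure
$ make
$ sudo make install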

Next up, I faced issues with the maximum number of columns: my pivot table had 1968 columns, and Postgres has a limit of 1600 total columns. According to this answer I looked into the source comments, and that looked quite overwhelming 😛 . Also, I do not have control over how many columns there will be post pivot, so no matter what value I set, I might need more columns in the future. Instead, I handled the scenario in my application code by splitting the data across multiple tables and storing them there.

References:

  1. https://til.hashrocket.com/posts/8f87c65a0a-postgresqls-max-identifier-length-is-63-bytes
  2. https://stackoverflow.com/questions/6307317/how-to-change-postgres-table-field-name-limit
  3. https://www.postgresql.org/docs/9.6/install-short.html
  4. https://dba.stackexchange.com/questions/40137/in-postgresql-is-it-possible-to-change-the-maximum-number-of-columns-a-table-ca

by subho at April 08, 2019 09:25 AM

April 06, 2019

Tosin Damilare James Animashaun

To be Forearmed is to be Help-ready

I felt compelled to write this after my personal experience trying to get help with my code on IRC.

We all love to make the computer do things exactly the way we want, so some of us choose to take the bold step of learning to communicate with the machine. And it is not uncommon to find many of our burgeoning kind go from place to place on the web, trying to get help along the way. We are quick to ask questions the moment we sight help.

When you learn to program, you are often encouraged to learn by doing.

The domain of computer programming or software development is a very practical one. Before now, I had carried this very principle everywhere with me -- in fact, preached it -- but hadn't really put it to use.

The thing about learning languages (or technologies) by reading big manuals is that, oftentimes, beginners will approach the process like they would any other book. But that is clearly the wrong approach, as empirical evidence has shown. You don't read these things simply to stomach them. Instead, you swallow and then post-process. In essence, you ruminate over stuff.


In truth, the only way you can really process what you read is to try things out and see results for yourself.

Weeks ago, while building an app, I visited IRC frequently to ask questions about just about everything that was unclear to me. While this mode of communication and seeking help is encouraged, abusing it is strongly discouraged. The good folks over on the IRC channels get pissed off when it appears you're bypassing available resources like documentation, preferring to be spoonfed the whole time. (Remember, this is not Quora, where the philosophy is for you to ask more and more questions.)

This was the sort of thing that happened to me when I began flooding the channels with my persistent querying. Most of the time the IRC folks kept pointing me to the documentation, as workarounds for most of the issues I had were already documented. A lot of things became much clearer when I decided to finally read-the-docs.

What did I learn from that? "Do your own research!" It's so easy to skip this part, but if you make the effort to find things out for yourself, you'll be surprised at how much you can dig out without having to bug people. Even then, there is no guarantee that the few important questions you do ask won't be met with hostility, but do not let that discourage you. The people who appear to be unwelcoming are doing so only as a way to discourage you from being over-dependent on the channel. Another advantage of finding things out for yourself is that you learn the why and not just the how.

I think it's fair to quote Armin Ronacher here,

"And it's not just asking questions; questioning other people, like what other people do or what you do yourself.

By far, the worst parts in all of my libraries are where I took the design from somewhere else. And it's not because I know better, it's because pretty much everything everybody does at any point in time has some sort of design decision behind it ... that major decision process.

So someone came up with a solution for a particular problem, and he thought about it and then wrote it down. But if you look at someone else's design, you might no longer understand why the decision was made in the first place. And ... in fact, if the original implementation is ten years old, who knows if the design ideas behind it are still entirely correct."


Personally, I like to answer questions and help put people on track. Nonetheless, if the queries got too overwhelming -- especially coming from the same person -- I would eventually lose interest in answering questions.


Let me remind you of some tidbits of IRC etiquette:

  • Construct your questions well (concise, well written and straight-to-the-point questions are more likely to attract help)

  • Don't EVER post code in a channel! Pastebin[1] it and share the link in the channel instead. While at it, don't post your entire code (unless you specifically need to). Post only the relevant portion -- the one you have an issue with. The only exception to this is if the snippet of code is considerably short, say one or two lines.

  • Don't be overly respectful. Yes, don't be too respectful -- cut all the 'Sirs'. Only be moderately polite.

  • Ensure you have and use a registered nick. This gives you an identity.

  • This last one is entirely my opinion but it's also based on what I have observed. Don't just be a leech, try to contribute to the community. Answer questions when you can.


So where do you look before turning to IRC? There are three sources you may read from before turning to internet relay chat for help:

  • Read the documentation. Documentation is the manual that the creators of, or experts on, a software product or tool provide their users with. So you want to know the ins and outs of a technology? That's the right place to look.

  • Read blog posts related to your topic-area. Blog posts are often based on people's experiences, so you're likely to find help from there, especially if the writer has faced the same issue. Remember to bookmark the really helpful ones as you go ;).

  • Last and very important: read the source code! This is two-fold. First, look into your own code carefully, checking what syntax or semantic errors you might have made. Secondly, look into the original code of the libraries/frameworks you are using, if they are open source; otherwise revert to the documentation. With this, you have it all stripped to its bare bones. Point blank! The source code reveals everything you need to know, once you know how to read it.


So why not arm yourself properly before going to post that question? That way, you would not only make it easier to get help [for yourself], but you would also be better informed.



  1. Some Pastebin platforms I use:

Note: Because Hastebin heavily depends on JavaScript, some people have complained of text-rendering issues, possibly arising from browser-compatibility problems. So take caution using it. That said, I love its ease of use. It supports keyboard shortcuts such as [Ctrl]+[S] to save.

by Tosin Damilare James Animashaun at April 06, 2019 09:40 PM

March 27, 2019

Sayan Chowdhury

Kubernetes Day India 2019, Bangalore


I arrived at this conference with the jet lag of another conference, FOSSASIA, but this was one I was very eagerly waiting to attend. Why? Well, there are a couple of reasons, but the main reason I bought the ticket was to listen to Liz Rice in person.

The other reason was to see and meet the growing community of Kubernetes in Bangalore.

On 23rd March 2019, I woke up early in the morning and left along with Sayani and Saptak for the venue, quite outside of Bangalore. Going to the venue almost felt like a weekend getaway trip.

Anyway, we reached the venue at 8 AM, and the Bangalore sun was already killing. CNCF had announced that people would be eligible for KubeCon tickets if they came and collected their badges before 8:30 AM. This was a nice trick, because I could see a huge crowd in front of me standing and collecting their badges. After collecting the badges, we headed back to the auditorium.

We grabbed some breakfast as soon as we reached the venue. The first talk/keynote was not delayed much and started just after breakfast.

Dan kicked off the event with an introduction to the conference, CNCF and a brief overview of the projects within CNCF.

Liz Rice took the stage after Dan and talked about permissions in Kubernetes. She gave a very nice ELI5-type analogy, comparing permissions/RBAC in Kubernetes with file permissions in Linux.

I moved outside after the talk and was mostly talking with people at the booths. Red Hat had its booth there too, so I spent some time talking to people about Fedora CoreOS and Silverblue.

It was very nice to see the growing audience of Kubernetes, and I also happened to learn about a couple more interesting projects the community is building.


Photo by Cameron Venti on Unsplash

by Sayan Chowdhury at March 27, 2019 04:32 PM

February 23, 2019

Farhaan Bukhsh

The Late End Year Review – 2018

I know I am really, really late, but better late than never.
This past year has been really formative for me.

In this short personal retrospective post, I am just going to divide my experience into 3 categories, the good, the bad and the ugly best.

The Bad

  1. My father got really sick and I got really scared by the thought of losing him.
  2. I moved on from the first company I joined, because I was getting a bit stifled and yearned to learn and grow more.
  3. My brother got transferred, so I had to live without family for the first time in my life. I had never lived alone before this.
  4. I was not able to take the 3-month sabbatical I thought I could.
  5. I couldn’t find a stable home and was on the run from one place to another constantly.

The Good

  1. I learnt how to live alone. I learnt how to find peace while being alone. Because of this, I could also explore more books and more importantly I could spend more time by myself figuring out what kind of person I want to become.
  2. I got a job with Clootrack, where people are amazing to work with and there is so much to learn.
  3. I found the chutzpah to quit my job, even though I didn’t have a backup. In a roundabout way, it gave me the strength to take risks in life and the courage to handle the consequences.
  4. Bad times help you discover good friends. I am not trying to boast about it, (but you are 😝– ed) but I am thankful to God that I have an overwhelming number of good friends.
  5. I got asked out in a coffee shop! This has never happened to me before. (BUT YES! THIS HAPPENED!).
  6. I wrote a few poems this year, all of them heartfelt.
  7. I gained a measure of financial independence and the experience of how to handle things when everything is going south.
  8. I finally wrote a project and released it. I was fortunate enough to get a few contributors.
  9. I am more aware now, and have stopped taking people and time for granted.
  10. Started Dosa Culture.
  11. Applied to more conferences to give a talk.

The Best

  1. I read more this year and got to learn about a lot of things, from Feynman to Krebs. I explored fiction, non-fiction, self-help, and humour.
  2. I went to Varanasi (home) more than I ever did in the last five years of my life. I spent lots of time with my parents. I am planning to do it more.
  3. Went on a holiday to Pondicherry. I went for a holiday for the first time, with the money I saved up for the trip. I saw the sunrise of 1st January sitting on Rock beach.
  4. Got rejected at all the conferences I applied to. No matter. It motivates me even more to try harder, to dance on the edge, to learn more, do more. It helps me strive for greatness, while also being a good reality check.
  5. Spent more time on working on hobby projects and contributing to open source.
  6. Got a chance to be a visiting faculty, and teach programming in college.
  7. Lived more! Learnt More! Loved More!

I feel I might be missing quite a few things in these lists, but these are the few that helped me grow as a person. They impacted me deeply and changed my way of looking at life.

I hope the coming year brings better experiences and more learning!

Until then,
Live Long and Prosper! (so cheesy – ed)

by fardroid23 at February 23, 2019 03:13 PM

February 05, 2019

Anwesha Das

Have a safer internet

Today, 5th February, is Safer Internet Day. The primary aim of this day is to advance the safe and positive use of digital technology for children and young people, and to promote conversation over this issue. So let us discuss a few ideas. The digital medium is the place where we live today; it has become our world. However, compared to the physical world, this world and its rules are unfamiliar to us. Adding to that, with the advent of social media we are putting our lives, every detail of them, into its domain. We are letting governments, industrial lords, political parties, snoops, and society see, judge, and monitor us. We, the fragile, vulnerable us, do not have any other option but to watch our freedom and privacy vanishing.

Do we not have anything to save ourselves? Ta da! Here are some basic ideas which you can try to follow in your everyday life to keep yourself safe in the digital world.

Use unique passphrases

Use passphrases instead of passwords. Passwords are easy to break as well as easy to copy, so instead of using “Frank” (a name) or “Achinpur60” (a part of your address), use a passphrase like “DiscountDangerDumpster”. It is easy to remember and hard to break. You can even assemble words from 2 or more languages (that is easy for us Indians, right?). I used diceware to generate that passphrase. Moreover, by unique I mean do not use the SAME PASSPHRASE EVERYWHERE. I can feel how difficult, tedious, nigh impossible it is to remember lengthy passphrases (not passwords now, remember!) for all your accounts. However, there is no way around it: if someone gets your passphrase for one account, they will be able to get into all of them. Unique passphrases help a lot in this case.
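
If you want to try it, diceware is also available as a command-line tool (a sketch; the real output is random by design, so the value below just reuses the example above):

$ pip install diceware
$ diceware -n 3
DiscountDangerDumpster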

Use password managers

To solve the above-mentioned problem of remembering long passphrases, you have a magic thing called a password manager. Just move your wand (read: mouse) once, and you can find your long passphrases safely protected in its vault. There are many different password managers: LastPass, KeePassXC, etc. If you want to know more about this, please read it here.

Do not leave your device (computer, phone, etc) unlocked

My 2-year-old once typed some not-so-kind words (thanks to autocorrect) to my in-laws, and the lovely consequence it brought still makes me shiver. Thankfully it was not someone with good technical knowledge and not-so-good intentions, who could have caused much greater, perhaps irrecoverable, damage. So please do not leave your device unlocked.

Do not share your password or your device with anyone

Sharing your password with anyone poses the same kinds of danger as mentioned above.

Do block your webcam and phone’s camera

It is now a well-known fact that attackers spy on us through our web cameras, deceiving users by installing webcam spyware. Many of us may think, “oh, we are safe, our device has indicator lights, so we will know when and if any video recording is happening.” But it is very much possible to disable the activity light through configuration changes and software hacks. So even if there is no light, your video can very well be taken.

Do not ignore security updates

Most of the time, when a security update notification pops up in the morning, we very smoothly ignore it for our morning dose of news or for checking our social media feed. However, that is the most irresponsible thing you can do in your day. It may be your last chance to secure yourself from future danger. Many times, cyber attackers take advantage of old, outdated software and attack you through it. It may be your old PDF reader, web browser or operating system. So, the most primary thing, digital security lesson 101, is to keep your software up to date.

Acquire some basic knowledge about your machine

I know it sounds tedious (trust me, I have passed through that phase), but please acquire some basic knowledge about your machine, e.g. which version of which operating system you are using, what other software is on your machine, their version numbers, and if and when they require any updates.

Do not download from random websites on the internet

Files downloaded from random websites might contain malware or viruses. They might affect not only your machine but all the devices on the network. So, please check the website you are downloading from.

The same caution as above applies here too: do not click on random URLs you receive over email or on social media sites.

Use two-factor authentication

Two-factor authentication is merely two steps of validation. It adds an extra layer of security in and for your device. In 2FA, the user needs to provide two credentials instead of one. It is advisable to have your 2FA on your mobile phone, or even better, to use a hardware token like a Yubikey, so that if someone wants to hack your account, they have to get hold of both the password and the phone (or token).

Use Tor network

The Tor Project is the most trusted and most recommended project for remaining private and retaining your anonymity. Tor is described on their website as “free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities, and relationships, and state security.” Have a look at it to know more.

Take proper legal action

If something terrible happens to you online, please visit the local cyber crime department and lodge a formal complaint there. Local police stations do not deal with matters related to cyber crime, so you may want to go directly to the appropriate cyber security cell. If you have no idea where it is or what to do there, go to your local police station, take their advice and the information you need, and then go to the cyber security cell.

Learn GPG encryption

It is always suggested that you know and learn GPG, the GNU Privacy Guard, if you are up to that level of learning technical things. It is a bit difficult, but surely a very useful tool to keep your privacy secured.
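
To give you a taste, generating a key pair and encrypting a file for someone looks roughly like this with a modern GnuPG 2 (a sketch; the recipient address and file name are hypothetical):

$ gpg --full-generate-key
$ gpg --encrypt --recipient friend@example.com message.txt
# produces message.txt.gpg, readable only with the recipient's key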

The steps I mentioned above may sound like “too much” to maintain. But let us pretend that your house is your device and your password is the key to enter it. You normally follow every possible way to keep your house keys safe, and the same rules apply here. The rules are nothing but a habit, like getting up in the morning: it seems difficult the first few times, but after that it is as organic and normal as can be. So, build the habit of keeping safe; merely using these tools will not offer the results you need.

Hope you have a happy, safe life in the digital world.

by Anwesha Das at February 05, 2019 05:40 PM

November 27, 2018

Anwesha Das

Upgraded my blog to Ghost 2.6

I have been maintaining my own blog. It is a self-hosted Ghost blog, with Casper, the Ghost default, as my theme. In September 2018, Ghost updated to version 2.0, so now it was my time to update mine.

It is always advisable to test changes before running them on the production server. I maintain a stage instance for this, and test any and all changes there before touching the production server. I did the same thing here.

I exported the Ghost data into a JSON file, and prettified it for ease of reading. I removed the old database and started the container for the new Ghost, then reimported the data into the new Ghost using the JSON file.
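
In container terms, the swap looked roughly like this (a sketch; the container name, volume path, and image tag are illustrative, not my exact setup):

$ docker stop blog && docker rm blog
$ docker run -d --name blog \
    -v /srv/ghost/content:/var/lib/ghost/content \
    -p 2368:2368 ghost:2
# then reimport the prettified JSON export via the Ghost admin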

I had another problem to solve: the theme. I used to have Casper as my theme, but its new look is something I do not like for my blog, which is predominantly a text blog. I was unable to keep the same theme on the new Ghost, therefore I chose Attila as my theme instead. I did some modifications, uploaded it, and enabled it for my blog. A huge gratitude to the Ghost community and the developers; it was a real smooth job.

by Anwesha Das at November 27, 2018 02:57 PM

October 29, 2018

Anu Kumari Gupta (ann)

Enjoy octobers with Hacktoberfest

I know what you are going to do this October. Scratching your head already? No, don’t do it, because I will be explaining in detail all that you can do to make this October a remarkable one, by participating in Hacktoberfest.

Guessing what is the buzz of Hacktoberfest all around? 🤔

Hacktoberfest is like a festival, celebrated by the open source community, that runs throughout the month. It is a celebration of open source software, and welcomes everyone, irrespective of their knowledge of open source, to participate and make their contribution.

  • Hacktoberfest is open to everyone in our global community!
  • Five quality pull requests must be submitted to public GitHub repositories.
  • You can sign up anytime between October 1 and October 31.

<<<<Oh NO! STOP! Hacktoberfest site defines it all. Enough! Get me to the point.>>>>

Already had enough of the rules and regulations and still wondering what it is all about, why to do it, and how to get started? Welcome to the right place. This Hacktoberfest centers a lot around open source. What is it? Get your answer.

What is open source?

If you are stuck on the name of open source itself, don’t worry; it’s nothing other than what the phrase ‘open source’ means. Open source refers to the availability of the source code of a project, work, software, etc. to everyone, so that others can see it, make changes to it that can benefit the project, share it, and download it for use. The main aim of doing so is to maintain transparency and collaborative participation, and to support the overall development and maintenance of the work; it is highly valued for its redistributive nature. With open source, you can organize events, schedule your plans, and host them on an open source platform as well. The changes that you make to others’ work are termed contributions. Contributions do not necessarily have to be core code; they can be anything you like: design, organizing, documentation, projects of your liking, etc.

Why should I participate?

The reason you should is that you get to learn, grow, and eventually develop skills. When you make your work public, it becomes helpful to you, because others analyze your work and give you valuable feedback through comments and issues. The kind of work you do makes you recognized among others. By participating actively, you also find mentors who can guide you through a project, which helps you in the long run.

And did I tell you, you get T-shirts for contributing? Hacktoberfest allows you to win a T-shirt by making at least 5 contributions. Maybe that is motivation enough to start, right? 😛 Time to enter the Open Source World.

How to enter into the open source world?

All you need is “Git” and an understanding of how to use it. If you are a beginner and don’t know how to start, or have difficulty starting off, refer to “Hello Git” before moving further. The article gives a basic understanding of Git and shows how to push your code through Git to make it available to everyone. Understanding is essential, so take your time going through it and grasping the concepts. If you are good to go, you are now ready to contribute to others’ work.

Steps to contribute:

Step 1: You should have a GitHub account.

Refer to the post “Hello Git“, if you have not already. The idea there is to get a basic understanding of the Git workflow and to create your first repository (your own piece of work).

Step 2: Choose a project.

I know choosing a project is a bit confusing. It seems overwhelming at first, but trust me, once you get the insights of working, you will feel proud of yourself. If you are a beginner, I would recommend first understanding the process by making small changes, like correcting mistakes in a README file or adding your name to the contributors list. As I already mentioned, not every contribution is code. Select whatever you like and feel you can change to improve the current piece of work.

There are numerous beginner-friendly as well as cool projects that you will see labelled as hacktoberfest. Pick one of your choice. Once you are done selecting a project, get into the project and follow the rest.

Step 3: Fork the project.

You will come across several similar posts where they will give you instructions and the commands you need to perform to reach the objective, but most important is that you understand what you are doing and why you are doing it. Here am I, to explain why exactly you need to perform these commands and what these terms mean.

Fork means to create a copy of someone else’s repository and add it to your own GitHub account. By forking, you are making a copy of the forked project for yourself to make changes to. The reason we do so is that you might not want to make changes directly to the main repository; the changes you make stay with you until you finalize them, commit, and let the owner of the project know about them.

You must be able to see the fork option somewhere at the top right.

[Screenshot: the Fork button at the top right of a GitHub repository]

Do you see the number beside it? That is the number of forks made of this repository. Click on the fork option and you will see it forking as:

Screenshot from 2018-10-29 22-45-09

Notice the change in the URL. You will see it is now under your account. Now you have your copy of the project.

Step 4: Clone the repository

What is cloning? It is actually downloading the repository so that it is available on your desktop for making changes. Now that you have the project at hand, you are ready to amend the changes that you feel are necessary, with the help of the tools and applications on your desktop.

The green “Clone or download” button shows you a link to copy, and another option to download the repository directly.

If you have git installed on your machine, you can perform commands to clone it as:

git clone "copied url"

where "copied url" is the URL shown, available for you to copy.

Step 5: Create a branch.

Branching is like the several directories you have on your computer. Each branch holds a different version of the changes you make. It is essential because it lets you track the changes you make.

To perform operations on your machine, all you need to do is change to the repository directory on your computer:

 cd  <project name>

Now create a branch using the git checkout command:

git checkout -b <branch-name>

The branch name is given by you. It can be any name of your choice, but keep it relatable.

Step 6: Make changes and commit

If you list all the files and subdirectories with the help of the ls command, your next step is to find the file or directory in which you have to make the changes, and make them. For example, if you have to update the README file, you will need an editor to open the file and write to it. After you are done updating, you are ready for the next step.
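
In terms of commands, saving that change means staging and committing it (the file name and the commit message here are just examples):

git add README.md
git commit -m "Fix typos in README"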

Step 7: Push changes

Now you will want these changes to be uploaded to the place they came from; the phrase used for this is that you “push changes”. This is done because, after the work, i.e. the improvements to the project, you will want to let it be known to the owner or the creator of the project.

So, to push the changes, you run:

git push origin <branch-name>

You can reference the remote URL easily (by default it’s origin). You can alternatively use any short name in place of origin, but you have to use the same name consistently.

Step 8: Create a pull request

If you go to the repository on GitHub, you will see information about your updates, and beside that a “Compare & pull request” option. This is the request made to the creator of the main project to look into your changes and merge them into the main project, if that is something the owner allows and wants to have. The owner of the project sees the changes you made and makes the necessary patches as he/she feels right.

And you are done. Congratulations! 🎉

Not only this; you are always welcome to go through the issues list of a project and try to solve a problem: first by commenting, letting everyone know whatever idea you have to solve the issue, and then, once your idea is approved, making contributions as above. You can make a pull request and reference the issue that you solved in it.

But, but, but… why don’t you create issues on a working project of your own and add the hacktoberfest label for others to solve? You will be amazed by the participation. You are the admin of your project: people will create issues and pull requests, and you have to review them and merge them into your main project. Try it out!

I hope you find it useful and enjoy doing it.

Happy Learning!

by anuGupta at October 29, 2018 08:20 PM

October 22, 2018

Sanyam Khurana

Event Report - DjangoCon US

If you've already read about my journey to PyCon AU, you're aware that I was working on a Chinese app. I got one more month to work on it after PyCon AU, which meant improving my talk to cover more things, such as passing the locale info in async tasks, switching languages in templates, supporting multiple languages in templates, etc.

I presented the second version of the talk at DjangoCon US. The very first people I got to see again, as soon as I entered the DjangoCon US venue, were Russell and Katie from Australia. I was pretty jet-lagged, as my international flight had been delayed by 10 hours, but I tried my best to deliver the talk.

Here is the recording of the talk:

You can see the slides of my talk below or by clicking here:

After the conference, we also had a DSF meet and greet, where I met Frank, Rebecca, Jeff, and a few others. Everyone was so encouraging and we had a pretty good discussion around Django communities. I also met Carlton Gibson, who recently became a DSF Fellow and also gave a really good talk at DjangoCon on Your web framework needs you!.

Carol, Jeff, and Carlton encouraged me to start contributing to Django, so I was waiting eagerly for the sprints.

DjangoCon US with Mariatta Wijaya, Carol Willing, Carlton Gibson

Unfortunately, Carlton wasn't there during the sprints, but Andrew Pinkham was kind enough to help me with setting up the codebase. We were unable to run the test suite successfully and tried to debug that; later we agreed to use django-box for setting things up. I contributed a few PRs to Django and was also able to address reviews on my CPython patches. During the sprints, I also had a discussion with Rebecca, and we listed down some points on how we can lower the barrier for new contributions in Django and bring in more contributors.

I also published a report of my two days sprinting on Twitter:

DjangoCon US contributions report by Sanyam Khurana (CuriousLearner)

I also met Andrew Godwin & James Bennett. If you haven't yet seen the Django in-depth talk by James, I highly recommend you watch it. It gave me a lot of understanding of how things happen under the hood in Django.

It was a great experience altogether being an attendee, speaker, and volunteer at DjangoCon. It was really a very rewarding journey for me.

There are tons of things we can improve in PyCon India, taking inspiration from conferences like DjangoCon US, which I hope to help implement in future editions of the conference.

Here is a group picture of everyone at DjangoCon US. Credits to Bartek for the amazing click.

DjangoCon US group picture

I want to thank all the volunteers, speakers and attendees for an awesome experience and making DjangoCon a lot of fun!

by Sanyam Khurana at October 22, 2018 06:57 AM

September 07, 2018

Farhaan Bukhsh

6 Bags and A Carton

This is not a technical post; this is something that I have been going through in life right now. A few months ago, when I left my first job (another time, another post 😉 ), I had a plan. I wanted to take a few months off, work on my technical knowledge, write amazing software, and get a lot of learning out of my little sabbatical.

But I was not able to do that, for a few reasons, primo being that I had to move homes in Bangalore because my brother got transferred, so the savings I had set aside wouldn’t be enough. This was not the end; when it rains, it pours, apparently. My dad got super sick: he had a growth near his kidney which the doctors diagnosed as cancer. I got really scared by the situation I was going through. The thing about your parents is that no matter how much you fight with them or how much they “control” you, at the end of the day the thought of losing them can scare the hell out of you. For me, they are my biggest support system, so I was not scared, I was terrified.

I gave it a really deep thought and took a call. I needed to find a job; the sabbatical could wait. I started applying to companies and talking to people about whether they needed an extra hand at work. One piece of advice: never leave a job unless you have another in hand. Luckily, I had my small pot of gold, savings, so even in this phase I was sustaining myself. Yes, savings are real and you should have a sufficient amount at any given point of your life. This helps you take the hard decisions and also think independently (what Jason calls F*ck you money).

It still feels like a nightmare to me. I used to feel that I would wake up and it would all be over. Reality check: it wasn't a dream, so I had to live with it and make efforts to overcome the situation.

Taking up a job for me was important for two reasons,

  1. I have to sustain myself
  2. I need to have a back up in case my dad needs something (I also have super amazing siblings who were doing the same)

I realised one thing about prayer and God; yes, I believe in God, and I don't know if prayer works, but you definitely get the strength to face your problems and the unknown. I used to call my dad regularly, asking how he was doing, and some days he could not speak all that much and would talk in his weak tone. I used to cry. I was in so much pain, although it was not physical or visible. And then, I would cry again.

But tough times teach you a lot. They show you real friends, they show you the people you care for, and, as Calvin's dad would have said, "It builds character!". I have been through bad times before, and the thing about time is, "It changes!". I knew someday this bad time I was going through would change; either the agony would reduce, or I would get used to it.

So, as I was giving interviews within a month of moving on from my old job, I was offered one at Clootrack. I liked the people who interviewed me, and I liked the ideas they have been working on. But I have seen people change, and I have gone through bad experiences, and at no point did I want to repeat past mistakes, so I did a thorough background check before I said yes to them. I got a really good response, so here I am, working with them.

The accommodation problem I had was that my brother was shifting out of his quarters, and I used to live with him. Well, I helped him pack, and I still remember bidding farewell to him and my sister-in-law. I had tears in my eyes, and after my goodbyes, the moment I stepped into the house I could feel the emptiness, and I cried the whole night. I could stay at the old place for a week, not more. At this point I can't thank Abhinav enough that he came as the support I needed. He graciously let me live with him as long as I wanted to. Apparently, he needed help paying his bills :P. The bugger would never accept the fact that he helped me. When dad's condition was getting bad, he gave me really solid moral support. I had also shared my situation with Jason, Abraar, Kushal and Sayan. I received a good amount of moral support from each one of them, especially Jason. I used to tell him everything, and he would just calm me down and talk me through it.

So when I shifted to Abhinav's place, all I had was 6 bags and a carton. My whole life was 6 bags and a carton. My office was a 2-hour bus ride one way and another 2 hours back. But I didn't have any problem with this arrangement, because this was the least of my problems. I literally used to live out of my bags, and I wasn't sure this arrangement would last long. I had some really amazing moments with Abhinav; I enjoyed our ups and downs and those little fights and leg-pulling.

Well, my dad is still not in the best of health, but he is doing better now. I visit my family more frequently now, and yes, call them regularly without a miss. I realised the value of health after seeing my dad. I went home after a month of joining Clootrack and stayed with him for a whole month, working remotely; we visited a few doctors, and they said he is doing better. After coming back, I realised I was not getting any time for myself, so I shifted to a NestAway near my office. Although I feel I've gotten used to the agony, you never know what life has in store for you next.
It feels much better now, though.

I thank God for giving me strength and my friends and family for supporting me in a lot of different ways.

With Courage in my Heart,
And Faith over Head

by fardroid23 at September 07, 2018 04:12 AM

September 03, 2018

Sanyam Khurana

Event Report - DjangoCon AU & PyCon AU

I was working on a Chinese app for almost 4 months, developing a backend that supports multiple languages. I spent time almost daily reading documentation in Chinese and converting it through the Google Translate app to integrate third-party APIs. It was painful, yet rewarding in terms of the knowledge that I gained from the Django documentation and several other resources about how to support multiple languages in Django-based backends, and the best practices around it.

While providing multilingual support through the Django backend, I realized that every now and then I was hitting a block and had to read through the documentation and research the web. There were certain caveats I worked around whenever I was stuck, and I noted them down as "gotcha moments" to cover later in a talk.
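
To give a flavour of what that kind of setup looks like, here is a minimal sketch of Django's standard i18n machinery (illustrative only, not the actual project code; the view and the strings are hypothetical):

# settings.py -- enable Django's translation machinery (illustrative values)
USE_I18N = True
LANGUAGES = [
    ('en', 'English'),
    ('zh-hans', 'Simplified Chinese'),
]
MIDDLEWARE = [
    # ... SessionMiddleware comes before LocaleMiddleware in a real stack
    'django.middleware.locale.LocaleMiddleware',
    # ... CommonMiddleware and the rest follow
]

# views.py -- mark user-facing strings for translation
from django.http import HttpResponse
from django.utils.translation import gettext as _  # ugettext in older Django

def greeting(request):
    # Resolved against the active language picked by LocaleMiddleware
    return HttpResponse(_("Welcome!"))

Strings marked this way are collected into .po files with manage.py makemessages and compiled with manage.py compilemessages, which is where most of the per-language work happens.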

I got an opportunity to be in Sydney, Australia for DjangoCon AU and PyCon AU. This was very special because it was my first international trip, and the first time I was attending a Python conference outside India.

I was excited and anxious at the same time. Excited to meet new people, excited to be at a new place, excited to see how another PyCon takes place and hopefully bring some good ideas about organizing a conference back to PyCon India :) I was anxious as it was a solo trip; I was alone, with impostor syndrome kicking in. "Will I be able to speak?" -- But then I decided that I would share whatever I've learned.

Even before the conference began, I got an opportunity to spend some time with Markus Holtermann (Django core-dev). We roamed around the Sydney Opera House, met Ian, Dom and Lilly, and later went to dinner.

PyCon AU with Nick Coghlan, Dom, Lilly, Markus Holtermann, Ian Foote, Andrew Godwin

I'm bad at remembering names! And when I say this, I mean super-bad. But to my astonishment, I was able to remember the names of almost everyone I had an opportunity to interact with.

I registered as a volunteer for PyCon AU, which in turn gave me a lot of perspective on how PyCon AU manages different aspects of logistics: food, video recording, speaker management, volunteer management, etc. There were certain moments when I thought, "Oh, we could've done this at PyCon India! We never thought about this!", and Jack Skinner was really helpful in discussing how they organize different things at PyCon AU.

My talk was on August 24, 2018, and it went pretty well.

You can see the slides of my talk below or by clicking here

Here is the video:

During the sprints, I met my CPython mentor Nick! Nick was the one who helped me get started with CPython during the PyCon Pune sprints.

I had never had an opportunity to try my hands at hardware in my life, and seeing so many hardware sprinters, I was curious to start playing with some of it.

During the two days of sprints, I was able to fix my CPython patches, land a few PRs to Hypothesis, a property-based testing library, and play with Tomu to use it as a 2FA device.

Throughout the sprints, I met many people and yet got so much work done, which left me astonished. (I really wish I could be that productive daily :) )

Overall, it was a really pleasant experience, and I prepared a list of notes of my PyCon AU takeaways, which I shared with the PyCon India team.

We had a grand 10th-anniversary celebration, and for the first time ever we had a jobs board at PyCon India, along with various other things :)

I want to thank all the organizers, volunteers and attendees of PyCon AU for all the efforts to make the conference so welcoming and inclusive for everyone.

-- Your friend from India :)

by Sanyam Khurana at September 03, 2018 06:57 AM

September 02, 2018

Sanjiban Bairagya

Akademy 2018 experience

This year's Akademy, the annual world summit of KDE, was held in the beautiful city of Vienna, Austria, from 11th to 17th August 2018. The 7-day event was divided into two parts: the first 2 days consisted mostly of keynote addresses and talks by KDE contributors, followed by 5 more days of BoFs and workshops. Just like every other KDE event, this one was as awesome as it could get.

Welcome party
The evening before the conference was scheduled to start, there was a nice welcome party with loads of food and drinks, where I got to catch up with most of the fellow KDE contributors whom I hadn't met for quite some time, and also got to see a lot of new faces, talking to whom felt like a breath of fresh air. Overall, it was a warm welcome, and it raised everyone's spirits for Akademy the next day.

Conference Day 1
Day 1 was opened by Lydia, our beloved President of KDE. Dan Bielefeld gave the keynote speech, where it was interesting to learn how free software helps in tackling human rights issues in North Korea. Numerous insightful talks followed throughout the day. Bhushan's update on Plasma on mobile devices was interesting, along with David Faure's talk on how to run KDE software without installing it.

Conference Day 2
Day 2 began with the keynote by Claudia Garad, where she spoke on how KDE could learn from the way Wikimedia faces its hurdles. Aditya Mehra spoke on the visionary Mycroft AI on Plasma. Bhavisha spoke about her contributions to openQA. Andreas' talk on building automotive ECUs with Yocto was absolutely inspiring. At the Akademy awards, it was great to see deserving individuals being recognised for their amazing contributions.

Social Event
At the end of day 2 of the conference, we headed off to the nearby Cafe Derwisch – Partycellar to party. Even though there was a looong wait in the queue, it was worth it. With loads of food and drinks, and a dance floor, it was the perfect recipe for fun and socialising. And boy, did we make legit use of that dance floor. The party was quite eventful and went way late into the night. It was an evening to remember.

BoFs and Trainings

  • In the KDE-India BoF, we discussed the journey of KDE in India so far, the obstacles we faced previously, and the best steps we can take for the next conf.kde.in event, among other topics.
  • The Mycroft BoF taught us how to use the AI, how to add new skills, its progress on Plasma Mobile, and the obstacles it faces in communicating with 3rd-party apps.
  • The BoF on the VVAVE project was particularly interesting to me, as it centred around a music-player app similar to the line of software I work on at my current company. We discussed a number of issues, including how to prioritise between online streaming and playing local files, and how to overcome the technical challenges involved.
  • The training on documentation provided tips and tricks on writing short, informative and comprehensible documents, followed by a hands-on assignment. This was very helpful.

Also, this year was the first time I was able to be part of the Annual General Meeting, and it was an interesting experience, especially being able to influence the decisions made inside KDE in such a direct and important way.

At the end of the final day, I (among others) got to draw (read: scribble) a colorful message about the principles of KDE on a piece of paper, thanks to Lydia. I'm eagerly waiting to see it uploaded somewhere soon! 🙂

Hikes, trips and picnics
There was a short walking trip organized on Tuesday evening, August 14, where we walked around interesting parts of the city, with magnificent monuments, churches, libraries, museums, palaces, and sculptures all around. The guide was kind enough to explain the historical significance of each building. Clearly a treat for the eyes, and informative as well. The following day, August 15, we went to Kahlenberg, where we enjoyed the amazing view of the beautiful city of Vienna from above. We also went to the top of one of the towers to take a look from even higher. On the final evening of the conference, August 17, we had a short picnic overlooking the Danube river. That was fun as well. Or rather sad, as it was the final day of the conference. My phone is filled with amazing pictures, thanks to all these fun initiatives.

Thank you, KDE, for letting me be a part of this amazing event. Keep rocking!

Akademy 2018 group photo

by sanjibanbairagya at September 02, 2018 01:50 AM

August 15, 2018

Anu Kumari Gupta (ann)

split() v/s rsplit() & partition() v/s rpartition()

split(), rsplit(), partition() and rpartition() are functions on strings that are used in Python. Sometimes there is confusion between them. If you feel the same, then I assure you, they will no longer be confusing to you.

Understanding split()

So, what does split() do? As the name suggests, split() splits the given string into parts. split() takes up to two optional arguments: one is the delimiter string (i.e., the token you wish to use for separating the string into words); the other is the maxsplit value, i.e., the maximum number of splits you wish to have. By default, split() splits on whitespace. The result is a list of the split words.

Here is how you use it:

By passing no arguments,

>>> s = "Hello people, How are you?"
>>> s.split()
['Hello', 'people,', 'How', 'are', 'you?']

By passing just the delimiter,

>>> s = "Hello people, How are you?"
>>> s.split(",")
['Hello people', ' How are you?']

By passing both the delimiter and the maxsplit (say 1, which means to allow only one split),

>>> s = "Hello people, How are you?"
>>> s.split('H', 1)
['', 'ello people, How are you?']

If you pass a maxsplit value greater than the maximum number of splits possible, it will simply return the list of all the possible separated words.
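
For instance, here is a quick illustration: asking for up to 100 splits on a space still returns only the five words that are possible:

>>> s = "Hello people, How are you?"
>>> s.split(' ', 100)
['Hello', 'people,', 'How', 'are', 'you?']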

Understanding rsplit()

You might have a question: when split() already splits the string, why do we need rsplit() at all, and what is it? The answer is that rsplit() does nothing more than split the given string, except that it starts splitting from the right side. It parses the string from the right.

Here is how you use it:

By passing no arguments,

>>> s = "Hello people, How are you?"
>>> s.rsplit()
['Hello', 'people,', 'How', 'are', 'you?']

By passing just the delimiter,

>>> s = "Hello people, How are you?"
>>> s.rsplit(",")
['Hello people', ' How are you?']

Note: the output remains the same as with split() when we don't pass any arguments or when we just provide the delimiter.

However, if we pass the maxsplit argument as below, you will see the difference:

>>> s = "Hello people, How are you?"
>>> s.rsplit('H', 1)
['Hello people, ', 'ow are you?']

Observe that the split now took place at the rightmost occurrence of the delimiter.

Understanding partition()

We have understood split() and rsplit(), but what is partition()? The answer is that partition() splits the string into exactly two parts (the part to the left and the part to the right of the specified delimiter), at the first occurrence of the delimiter. It returns a tuple of the left part, the delimiter itself, and the right part.

Here is how you use it:

>>> s = "I love Python because it is fun"
>>> s.partition("love")
('I ', 'love', ' Python because it is fun')

Note: there is no default argument. You must pass a delimiter, otherwise it raises a TypeError.
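
It is also worth noting that if the delimiter is not found at all, partition() does not raise an error; it returns the whole string as the first element of the tuple, with the other two elements empty. A quick illustration:

>>> s = "I love Python because it is fun"
>>> s.partition("hate")
('I love Python because it is fun', '', '')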

Understanding rpartition()

By now, the working of rpartition() should be intuitive. rpartition(), just like rsplit(), does the partition from the right side. It parses the string from the right, and when the delimiter is found, it partitions the string and gives back a tuple, just as partition() does.

Here is how you use it:

>>> s = "Imagining a sentence is so difficult, isn't it?"
>>> s.rpartition("is")
('Imagining a sentence is so difficult, ', 'is', "n't it?")

Notice that the split happened at the last occurrence of "is" in the given string.
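
Similarly, when the delimiter is not found, rpartition() places the whole string in the last element of the tuple rather than the first. A quick illustration:

>>> s.rpartition("was")
('', '', "Imagining a sentence is so difficult, isn't it?")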


Hope this helps in understanding how these functions work!

Happy Coding.

by anuGupta at August 15, 2018 07:37 PM