Planet dgplug

June 22, 2018

Jason Braganza (Work)

Aaron Swartz’s Pinboard Profile

One of the assignments in the DGPLUG summer training is to watch “The Internet’s Own Boy”.



I keep telling the youth in the channel that I lived through Aaron Swartz’s life and followed him throughout.
And then I began to wonder how I did it.
I never followed him personally; not his blog, nor any mailing list.
Early on, I only knew of him through Google and through Dave Winer’s & John Gruber’s posts.
And then it struck me today.

Aaron’s Pinboard profile!

Going all the way back to late 2004 (I don’t know when he imported it all), it’s a goldmine of insight into how Aaron’s mind ticked: what he liked, the political discourse that influenced him, the movies and books that appealed to him, the people he followed.

It’s all there.

I think I actually stumbled onto Maciej via Aaron, and then followed him too, as he started Pinboard and went on his way to build his Bookmark Empire.
And in those early halcyon days of deciding whom to follow and whom to read, I binge-read Aaron’s profile in 2009–10, and then kept binging again, on and off, through the years.

And that’s how I know Aaron.
Go read. It’s a wonderful complement to the video.

by Mario Jason Braganza at June 22, 2018 04:35 AM

June 16, 2018

Jason Braganza (Work)

A Transcript of Seth Godin’s Akimbo Episode on Blogging

This episode (Season 2, Episode 1) on blogging is very important to me. I think it’s a distillation of all of Seth’s thoughts about writing and blogging into one crisp, crackling 20-minute episode.

I wanted this for permanent reference, so I thought I’d transcribe it for myself; and then I thought, well, if it helps me, it’ll surely help others.

So here you go.
It’s all Seth below …
P.S. Typos and errors, omissions and emphases, entirely mine.



At the end of May, 2018, I moved my blog.
I moved it to a new platform.
This is a little bit like moving to a new house, except it takes longer, it’s more emotional, and it’s a lot more expensive.

Hey, it’s Seth. And this, is Akimbo.

Moving my blog is a metaphor for a lot of things; about being found, about how ideas spread, and about the passage of time, in a fast moving world where the culture is driven by the Internet, and the Internet … is driven by the culture.
The last time I moved my blog, was 16 years ago, when George W Bush was president, when Alicia Keys had her debut album and when the bestseller list had names on it, like James Patterson and Stephen King.
So some things change; some things … not so much.

Before I had my blog on Typepad, I used to deliver it by email and I’d been doing that since the 1990s.
What I discovered then, what I wrote a book about, is the simple idea that:

Anticipated, personal and relevant messages are more likely to resonate with people, than spam.

Day by day, week by week, I built up a list of people, who wanted to hear from me, who wanted to get an email newsletter from me, back in the day, when newsletters actually had stamps on them.
I understood, from seeing the work of Esther Dyson that an effective newsletter could completely change the game.
The goal isn’t to reach a lot of people, the goal is to reach the right people, and to reach them in a way, where they are glad you showed up.
Anticipated, personal and relevant, means, that they would miss you if you didn’t show up; that’s my definition of permission.
Earning the privilege … you can’t take it, it’s not a right; there’s no such thing as free speech—when we’re not talking about the government. This is earned speech.
Showing up with a message that people want to get, drip by drip, day after day.

At the time, my emails didn’t come out every day; that would have overwhelmed most people in the information-starved 1990s, at least compared to today.
When I began it was twenty people, getting a newsletter about the struggling entrepreneur they knew.
Then it was twenty-five.
Then it was forty.
Then, it was a hundred.

This led to my second big insight:

Ideas that spread, win.

In the late 90s this magazine showed up, Fast Company.
I decided, Fast Company was the greatest magazine ever published.
I also decided that my life dream, was to be a columnist, for Fast Company magazine.
I figured I had something to say, and I figured that was the place to say it.
I sent a note to Bill and Alan, the founders, the editors and I told them, I wanted to write a column for them. I offered to write it for free. Alan wrote back a really nice note saying, “We’d love to, except, we don’t run columns.”
Well that didn’t deter me, so I started writing a column for Fast Company, even though they didn’t have columns.
Every week, I sent Alan & Bill, a new column and by the time, I got to, I guess, the eighth or ninth week, their ad sales had started to go up and they realised they needed more editorial to sit next to all of those ads. So they said, “Sure, Seth. If you want to write for us, if you want to write thousands and thousands of words for us, for free, we’d be delighted to run your column.”

I was thrilled. I made a decision then and there, that the goal of the column was to write something that people would xerox, in an old-fashioned Xerox machine, and put one in the slot of every person in the mailroom; the old-fashioned mailroom, the old-fashioned slot, the whole idea that there was an office to begin with. So instead of people forwarding it by email (which was unheard of), people were actually copying a column and putting it in other people’s mailboxes.

Ideas that spread, win.

Month after month, my column grew in its impact. Fast Company grew, in its impact. And that lesson has not been forgotten. That what we can do, is serve a small group of people, with an idea that they want to share.
Why do they want to share it?
In the case of my Fast Company column, the reason was this:
I was telling them something, they already believed, they already wanted to be true, they already wanted to share.
My job, was to write it in a way, that made it cogent and easy to share.
And so they did. And if that resonated with some other people, they joined in.

A few years later, I was at a conference. I met Joi Ito there. He was on the board of a company called Six Apart. I saw for the very first time what the Typepad platform looked like, and I moved my blog to it a few weeks later.
Here’s the thing.
Google changed everything.
They changed everything in a way that most people don’t see or understand. Most people use Google to find things. They assume it is telepathic. They assume that all they have to do is type in a few simple words, and Google will comb through the entire Internet and find them exactly what they’re looking for. Most people never get past the first page of results on Google.
Well it’s so popular, so many billions of searches are done, that everybody, who makes a thing, who has a service, who wants a job, who needs or wants to be found … wants to be found by Google.
There is a haystack, the biggest haystack in the history of the world, and each of us, each of us who wants to make a difference, who wants to be found—we’re needles.

And so there’s a problem. There’s a challenge.

And the challenge is, getting found for a generic term. Right, if I search for “John Jacob Jingleheimer Schmidt” and Google does its job right, I will actually find John. But most of the time, that’s not what people search for.
Most of the time, people aren’t sure what they’re looking for and they want Google to find it for them. And so, in every town there are a thousand plumbers, with sharp elbows, hoping that they will be the first match for the word, plumber. And in every industry there are consultants or there are freelancers or there are companies, big or small, waiting to be found.

At the beginning, Google’s algorithm was pretty primitive. It wasn’t particularly difficult to cheat your way to the front of the line, to play in ways that the Google algorithm liked a lot and get more than your fair share of visits, from the hordes of people, searching, for the likes of you.

And so we began this striation, this sedimentary approach; people at the top, people in the middle, people at the bottom, not because they are worthy, not because Google has done a site visit or the Department of Health has verified them, but instead, because they got good at getting found.

This is often called SEO, Search Engine Optimisation.
It’s a weird term. Optimising who? Optimising what?
Well, the search engine is Google and what we are optimising is the way, our website looks and feels and is seen by the other people in the world, so that Google will pick us.
And it’s led to all sorts of weird side effects. People twisting themselves into knots, not seeking to serve the customer, but seeking to serve some sort of mythical wizard, inside the box, that calls itself Google.

So what does this have to do with blogs?

I’ll get to that in just a second.
At the beginning, the genius of Google’s algorithm was this—they didn’t rank pages, based on what was on the page. They ranked pages, based on what people who linked to the page were saying. So if a lot of people linked to a page, saying this is the best hotel in all of Ghana, then if you search for “best hotel in Ghana”, the Google algorithm should have found, the page they were all pointing to.

If you wanted to be found then, the idea of writing blog posts that were often shared and spread, made an awful lot of sense. Because instead of just one website that just sat there all day, everyday the same, you were writing a blog.
A blog about this and a blog about that.
So for eight years, if you typed the word blog into Google, my blog, was the very first match.
Now it’s important for me to state, that I didn’t write the blog so this would be true. I wrote the blog, because after I left Fast Company, I wanted that same experience.
The experience of people, xeroxing my posts and putting them into the office mailboxes. Of course, no xeroxing, just email.

So I was writing for the very reason, that people were linking. I was writing so that people, would spread the ideas.
No ads on my blog. Rare calls to action.
That’s not what it’s for.
What it’s for, is to teach people, to show them something.

But, back to this challenge and how the culture changed.

Because what happened was, people who didn’t belong at number one in anything, by whatever measure you want to use, decided that being number one was so valuable, they would spend their time and their money working to be number one, as opposed to working to be better, working to serve people better.

And thus, SEO developed a bad reputation.
The idea was that for a thousand or five thousand or fifty thousand or a hundred thousand dollars, you could game the system so that you would get more links than you (quote) “deserved”, exposing many of the failings of the Google algorithm, because human beings weren’t actually looking at your site.

Once it became worth millions of dollars, tens of millions of dollars to be ranked number one in Google, an arms race began. That arms race brought in good operators, bad operators, people who were playing for the right reasons and people who weren’t.

But what we saw in all of the hotly contested areas, things like hotels or travel or things that we might buy, or services or obscure terms, were people who were subverting the very idea behind the search engine.

It was then, that Google made an interesting choice.
I’m not sure what I would have done in their shoes.
But it was a two-part choice. They didn’t really understand how the algorithm worked any more, because it had become too complicated. More than three thousand people were hand-tweaking the way Google was scoring pages. So Google pretended,

a. that they knew exactly how it worked, and
b. that no human beings were actually making these decisions. It was simply the algorithm. “Well, it’s not our fault you moved down, it’s the algorithm.” “Oh, it’s not our fault that this hate term is number one in the results. It’s the algorithm.”

And somehow, though all of this is nonsense, it also undermines their responsibility, because once they are the middleman, the monopoly on how people find stuff, they do have a responsibility to keep their promise and give the best possible results.

But we’re here to talk about the change that each of us can make, and I think the key insight is this

You cannot trust, that your needle is going to get found in the haystack.

You cannot trust that any generic word, the word you seek to own (butcher shop, shoe store, pick whichever one you want), is going to end up with you on top.
And if you’re not on top, if you’re number twenty, or number fifty, or number one hundred, you might as well be invisible.

The alternative, is to win when someone searches for you.

So, if you look for Seth, you’ll find me.
If you look for Seth Godin, you’ll definitely find me.
If you look for Newton Running Shoes, you’ll find the people that make Newton Running Shoes.
So the game goes from, “How do I persuade Google to find me, when someone is looking for the generic?” to “How do I persuade the public to look for the specific?”

And so, as we enter this post Google age, where clearly there’s room for more than one winner for every noun, how do we have a chance to change the culture?

And the answer is this.
The answer is,

Change the people you engage with, so much, that they want to tell other people.

Have them want to tell other people in the specific, not in the general.

You may have heard me talk about one of my favourite examples: The Poilâne Bakery in Paris.
Run by Apollonia Poilâne, the daughter of the late Lionel.
This bakery is extraordinary. Lines out the door. A premium product enjoyed at most of the fine restaurants in Paris.
If you search for bakery, you will not find it. Not easily.
But if you search for Poilane, there it is, right up top, where it belongs.

So that’s the mission.
The mission is to write things, create things, post things, engage with things … that people choose to share.
To earn the permission of people they share them with.
The permission to follow up, the permission to teach, the permission to engage,
And then share some more, and then teach some more, and do it in a way that people will share it again.
And then people will share it again.
And then people will share it again.
Each time, earning you more permission.

Because trusting the middleman on the Internet, that’s a dangerous game.
That if you are building your content on LinkedIn or building your content on Facebook, you’re sharecropping.
That you’re working for a landlord who does not care about you.
That has no contractual obligation to keep their word.
That at any time they can say, “Oh! You know all those people you have permission to talk to? Your followers? Your friends? We’re going to start charging you money to reach them. Wanna boost this post?” That’s a lousy deal.
That what we have to figure out how to do, is engage with a platform that has an obligation to us.

That was my relationship with Typepad.
And I’m super grateful, that in the sixteen years I was on their platform, they kept their end of the bargain.
I happily paid them, whatever it was, $20 a month, because that money was repaid to me again and again, by a third party that had my interests at heart, a site that was up almost all the time.
It worked. Because I was paying for it.
I wasn’t the product.
I was the customer.
And my job, on that platform, was to be a teacher.
My job, on that platform, was to teach, was to make it easy for people to find me, if they were looking for me.
Not looking for the generic, but looking for the specific.
And then, to earn their attention and trust, and to keep a promise.

I discovered that five to ten years ago, I was blogging three times a day.
I was sort of insane. I don’t know what kind of caffeine I was drinking.
I realised that my promise was out of hand.
So I made a specific promise.
I said “Once a day. That’s it. I’m not going to overwhelm you. Once a day, I’ll be here.”

And I’ve been there everyday, since then.
Partly, because I have something to teach, partly because I have something to say, partly because I have something to share.
But also, because I made a promise.

Anticipated, personal and relevant messages to people who want to get them.

Drip by drip.
Day by day.

So I’m grateful to all the people, who worked so hard, to help me build this new platform for my blog.

But waaaaay more important than that, I’m encouraging each one of you to have one.
Not to have a blog to make money. Because you probably won’t.
Not to have a blog because you’ll have millions and millions of readers. Because you probably won’t.
But to have a blog because of the discipline it gives you, to know, that you’re going to write something tomorrow, something that might not be read by many people, it doesn’t matter.
It’ll be read by you.
And that if you can build that up, ten at a time, twenty at a time, a month at a time, day by day, you will begin to think more clearly.
You’ll make predictions.
You’ll make assertions.
You’ll make connections.

And there, they will be, in type, for you to look at a month or a year later.
This practice, of sharing your ideas with people who will then choose, or not choose, to share them, helps us get out of our own heads, because it’s no longer the narrative inside, it’s the narrative outside.
The narrative that you’ve typed up, that you’ve cared enough to share.

So SEO’s fine. If you win at SEO, Congratulations!
I’ll send you a postcard, maybe a medal and a ribbon.
It’s great.
Someone needs to win at every single noun, anyone could search on.
But it might not be you. It probably won’t be you. The odds are against it being you.
A twelve-year-old probably should not grow up saying “I will not be happy, unless I am the champion of the world at this sport or that thing.”
Because the odds are too long.
It’s not worth betting your happiness on that.
That if we’re going to change the culture, we’re going to have to figure out how to bypass the generic Google search, and instead reach a few, the smallest viable audience, the group of people we seek to serve, to connect those people with each other and with our ideas in such a way that we become the specific, not the generic.

Because if you’re specific enough and generous enough and consistent enough, it’s worth the journey.

Thanks for tuning in to Akimbo.
I hope that you will subscribe and tell your friends.

by Mario Jason Braganza at June 16, 2018 06:30 PM

June 11, 2018

Farhaan Bukhsh

Home Theatre!

Due to a lot of turmoil in my life in the recent past, I had to move in with a friend. Abhinav is an old friend and college mate; we have hacked on a lot of software and hardware projects together, but this one is one of the coolest hacks of all time, and since we are flatmates now, it solved a lot of issues. We also had his brother Abhishek along, so the hack became even more fun.

The whole idea began with the thought of using the old laptops we have as servers; we just asked ourselves what we could do to make the best of the machines we had. Abhinav had already done a few setups, but we landed on building an HTPC, which stands for Home Theatre PC, or media centre: basically a one-stop shop for all our needs, movies, TV shows and music. And we came up with a nice arrangement, which requires a few things. The hardware we have:

  1. Dell Studio 1558
  2. Raspberry Pi 3
  3. And a TV to watch these on 😉

When we started configuring this setup, we had a desktop version of Ubuntu 18.04 installed, but we figured out that this was slowing down the machine, so we switched to the Ubuntu Server edition. This was a learning experience, because I had never installed a server version of an operating system before. I always used to wonder what kind of interface these versions would give. Well, without any doubt, it just has a command-line utility for everything, from partitioning to network configuration.

Once the server was installed, we just had to turn it into a machine which could support our needs; basically, we installed a few packages.

We landed on something called the Atomic Toolkit. A big shoutout to the team for developing this amazing installer, which has an ncurses-like interface and can run anywhere. Using this toolkit, we installed and configured CouchPotato, Emby and Headphones.


This was more than enough; we could automate a lot of things in our lives with this kind of setup, from Silicon Valley to Mr. Robot. CouchPotato helps us get the best quality videos, and Emby gives us a nice dashboard to show all the content we have.

I don’t use Headphones much, because I love another music application, but Headphones being a one-stop shop isn’t wrong either. All this was done on the Dell Studio machine we had; we also gave it a static IP, so we know which IP to hit.
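
For reference, on Ubuntu Server 18.04 a static IP is set through netplan. A minimal sketch of such a config (the interface name and addresses here are made up; check yours with ip a):

network:
  version: 2
  ethernets:
    enp0s25:
      addresses: [192.168.1.50/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]

Save it as a file under /etc/netplan/ and apply it with sudo netplan apply.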

Our server was up, running and configured. Now we needed a client to talk to this server. We do have a TV, but that TV is not smart enough, so we used a Raspberry Pi 3 and attached it to the TV using the HDMI port.

We installed OSMC on the Raspberry Pi and configured it to use Emby and listen to the Emby server; once we booted it up, it was all very straightforward. This made our TV look good, and also a little smart, and it opened our way to thousands of movies, music and podcasts. Although I don’t know whether setting up this system was more fun, or watching those movies will be.

 

by fardroid23 at June 11, 2018 03:03 AM

June 07, 2018

Jason Braganza (Personal)

Daily Writing, 43 – How to Pick a Career



I’ve written about the critical distinction between “reasoning from first principles” and “reasoning by analogy”—or what I called being a “chef” vs. being a “cook.”

The idea is that reasoning from first principles is reasoning like a scientist. You take core facts and observations and use them to puzzle together a conclusion, kind of like a chef playing around with raw ingredients to try to make them into something good. By doing this puzzling, a chef eventually writes a new recipe. The other kind of reasoning—reasoning by analogy—happens when you look at the way things are already done and you essentially copy it, with maybe a little personal tweak here and there—kind of like a cook following an already written recipe.

For any particular part of your life that involves reasoning and decision making, wherever you happen to be on the spectrum, your reasoning process can usually be boiled down to fundamentally chef-like or fundamentally cook-like.
Creating vs. copying.
Originality vs. conformity.

Being a chef takes a tremendous amount of time and energy—which makes sense, because you’re not trying to reinvent the wheel, you’re trying to invent it for the first time. Puzzling your way to a conclusion feels like navigating a mysterious forest while blindfolded and always involves a whole lot of failure, in the form of trial and error.
Being a cook is far easier and more straightforward and less icky. In most situations, being a chef is a terrible waste of time, and comes with a high opportunity cost, since time on Earth is immensely scarce.
Throughout my life, I’ve looked around at people who seem kind of like me and I’ve bought a bunch of clothes that look like what they wear. And this makes sense—because clothes aren’t important to me, and they’re not how I choose to express my individuality. So in my case, fashion is a perfect part of life to use a reasoning shortcut and be a cook.

But, when you subtract childhood (~175,000 hours) and the portion of your adult life you’ll spend sleeping, eating, exercising, and otherwise taking care of the human pet you live in, along with errands and general life upkeep (~325,000 hours), you’re left with 250,000 “meaningful adult hours.”
So a typical career will take up somewhere between 20% and 60% of your meaningful adult time—not something to be a cook about.
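
For scale, these figures line up with a roughly 85-year life; the arithmetic below is my own back-of-the-envelope check, not Tim’s:

85 years × 365 days × 24 hours ≈ 745,000 hours in total
745,000 − 175,000 (childhood) − 325,000 (upkeep) ≈ 245,000, i.e. roughly 250,000 meaningful adult hours
a 40-hour week over a 50-year career ≈ 100,000 hours, about 40% of them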

It’s a regular length Tim Urban article
(This means it’s looong … booklet sized)
And it’s practical.
And humorous.
And absolutely smashing.
And packed with wisdom.
Go, read.

by Mario Jason Braganza at June 07, 2018 06:31 PM

May 30, 2018

Kushal Das

Tor Browser and Selenium

Many of us use Python Selenium to do functional testing of our websites or web applications. We generally test against Firefox and Google Chrome on the desktop. But there are also a lot of people who use Tor Browser (from the Tor Project) to browse the internet and access web applications.

In this post we will see how we can use the Tor Browser along with Selenium for our testing.

Setting up the environment

The first step is to download, verify, and then extract the Tor Browser somewhere on your system. Next, download and extract geckodriver 0.17.0 somewhere on your path. For the current series of Tor Browsers, you will need this particular version of geckodriver.

We will use pipenv to create the Python virtualenv and also to install the dependencies.

$ mkdir tortests
$ cd tortests
$ pipenv install selenium tbselenium
$ pipenv shell

tor-browser-selenium is the Python library required for Tor Browser Selenium tests.

Example code

import unittest
from time import sleep
from tbselenium.tbdriver import TorBrowserDriver


class TestSite(unittest.TestCase):
    def setUp(self):
        # Point the path to the tor-browser_en-US directory in your system
        tbpath = '/home/kdas/.local/tbb/tor-browser_en-US/'
        self.driver = TorBrowserDriver(tbpath, tbb_logfile_path='test.log')
        self.url = "https://check.torproject.org"

    def tearDown(self):
        # We want the browser to close at the end of each test.
        self.driver.close()

    def test_available(self):
        self.driver.load_url(self.url)
        # Find the element for success
        element = self.driver.find_element_by_class_name('on')
        self.assertEqual(str.strip(element.text),
                         "Congratulations. This browser is configured to use Tor.")
        sleep(2)  # So that we can see the page


if __name__ == '__main__':
    unittest.main()

In the above example, we are connecting to https://check.torproject.org and making sure that it tells us we are connected over Tor. The tbpath variable in the setUp method contains the path to the Tor Browser on my system.
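
As an aside, TorBrowserDriver can also be used as a context manager, which closes the browser for you at the end of the block; a minimal sketch, assuming the context-manager usage shown in the tbselenium README and the same tbpath as above:

from tbselenium.tbdriver import TorBrowserDriver

# The with-block quits the Tor Browser once we are done with it.
with TorBrowserDriver('/home/kdas/.local/tbb/tor-browser_en-US/') as driver:
    driver.load_url("https://check.torproject.org")
    print(driver.title)  # the usual Selenium attributes keep working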

You can find many other examples in the source repository.

Please make sure that you test your web applications against Tor Browser; having more applications which run smoothly on top of Tor Browser will be a great help to the community.

by Kushal Das at May 30, 2018 04:39 AM

May 29, 2018

Kushal Das

PyQt5 thread example

PyQt is the Python binding for the Qt library. To write Qt5 code, we use the PyQt5 module. Like many others, my first introduction to GUI application development was using PyQt. Back at foss.in 2005, a talk by Sirtaj introduced me to PyQt, and I later fell in love with it.

I recently tried to help with a GUI application, after 8 years (I think); a lot of things have changed in between. But Qt/PyQt still seems to be super helpful when it comes to ease of development. Qt has some of the best documentation out there for any Open Source project.

Many students start developing GUI tools by replacing one of the command line tools they use. Generally the idea is very simple: take some input in the GUI, process it (using a subprocess call) on a button click, and then show the output. The subprocess call happens inside a plain method, which means the whole GUI stays stuck till the function call finishes. We can fix this issue by using a QThread. In the examples below, we will write a frontend for the git clone command, first without a thread and then with a QThread.

Setting up project directory

I used Qt Creator to create a simple MainWindow form and saved it as mainwindow.ui in the project directory. Then I used pipenv to create a virtualenv and installed the pyqt5 module. Next, I used the pyuic5 command to create a Python file from the UI file.
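
The conversion step itself is a one-liner; the file names here are the same ones used in the code below:

pyuic5 mainwindow.ui -o mainwindow.py

This generates the Ui_MainWindow class that both examples import.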

The code does not have error checks; the subprocess documentation should give you enough details on how to add them.

Doing git clone without any thread

The following code creates a temporary directory, and then git clones any given git repository into that.

#!/usr/bin/python3

import sys
import tempfile
import subprocess
from PyQt5 import QtWidgets

from mainwindow import Ui_MainWindow


class ExampleApp(QtWidgets.QMainWindow, Ui_MainWindow):

    def __init__(self, parent=None):
        super(ExampleApp, self).__init__(parent)
        self.setupUi(self)
        # Here we are telling to call git_clone method when
        # someone clicks on the pushButton.
        self.pushButton.clicked.connect(self.git_clone)

    # Here is the actual method which does git clone
    def git_clone(self):
        git_url = self.lineEdit.text()  # Get the git URL
        tmpdir = tempfile.mkdtemp()  # Creates a temporary directory
        cmd = "git clone {0} {1}".format(git_url, tmpdir)
        subprocess.check_output(cmd.split())  # Execute the command
        self.textEdit.setText(tmpdir)  # Show the output to the user


def main():
    app = QtWidgets.QApplication(sys.argv)
    form = ExampleApp()
    form.show()
    app.exec_()


if __name__ == '__main__':
    main()

Doing git clone with a thread

In the example below, we add a new CloneThread class; it has a run method, which gets called when the thread starts. At the end of run, we emit a signal to inform the main thread that the git clone operation has finished.

#!/usr/bin/python3

import sys
import tempfile
import subprocess
from PyQt5 import QtWidgets
from PyQt5.QtCore import QThread, pyqtSignal

from mainwindow import Ui_MainWindow


class CloneThread(QThread):
    signal = pyqtSignal('PyQt_PyObject')

    def __init__(self):
        QThread.__init__(self)
        self.git_url = ""

    # run method gets called when we start the thread
    def run(self):
        tmpdir = tempfile.mkdtemp()
        cmd = "git clone {0} {1}".format(self.git_url, tmpdir)
        subprocess.check_output(cmd.split())
        # git clone done, now inform the main thread with the output
        self.signal.emit(tmpdir)


class ExampleApp(QtWidgets.QMainWindow, Ui_MainWindow):

    def __init__(self, parent=None):
        super(ExampleApp, self).__init__(parent)
        self.setupUi(self)
        self.pushButton.setText("Git clone with Thread")
        # Here we are telling to call git_clone method when
        # someone clicks on the pushButton.
        self.pushButton.clicked.connect(self.git_clone)
        self.git_thread = CloneThread()  # This is the thread object
        # Connect the signal from the thread to the finished method
        self.git_thread.signal.connect(self.finished)

    def git_clone(self):
        self.git_thread.git_url = self.lineEdit.text()  # Get the git URL
        self.pushButton.setEnabled(False)  # Disables the pushButton
        self.textEdit.setText("Started git clone operation.")  # Updates the UI
        self.git_thread.start()  # Finally starts the thread

    def finished(self, result):
        self.textEdit.setText("Cloned at {0}".format(result))  # Show the output to the user
        self.pushButton.setEnabled(True)  # Enable the pushButton


def main():
    app = QtWidgets.QApplication(sys.argv)
    form = ExampleApp()
    form.show()
    app.exec_()


if __name__ == '__main__':
    main()

You can find the source code here. You can find a bigger example in the journalist_gui of the SecureDrop project.

by Kushal Das at May 29, 2018 02:26 AM

May 26, 2018

Jason Braganza (Doppelgänger)

Writing Day 33 - Akimbo



I’ve been raving to my friends about Akimbo, and I’m surprised I haven’t written about it here.
It’s one of the reasons I’m glad I’m alive at a time like this.
There’s no way, without modern technology, that I could learn from a master like Seth Godin for free!

Akimbo is Seth’s latest project, and it’s a podcast on all things Seth.
To me, it’s my real life MBA.
They’re weekly, punchy, 20-minute episodes, each on a single topic.

Last week’s episode about Genius was genius.

This week was all about The Long Term.
Ponzi schemes, Bitcoin ponzi ICOs, Mr. Ponzi himself, Whales, Hippos, Fedex, Olive trees, Starbucks and even Superman make an appearance.

Here are a few scrambled notes …

  • Emergencies feel like a matter of life or death

  • Every culture in every corner of the globe has adopted the mindset that tomorrow is too late!

    1. We’re impatient and want a quick return on our effort
    2. We want proof. We’re insecure that our effort will pay off
    3. We want excitement!
    4. All three of these create a ratchet, that quickens things up drastically (and on a personal note, makes things overwhelming)
  • Human beings are really shitty at the long run
    If you want people to take action, you gotta compress it forward.

  • If you want to change the behaviour of a group of people, make it all about the now. Make it urgent, not important.

    • Make it painful and expensive in the moment, if you want to stop them doing something (e.g. hefty taxes on cigarettes)
    • Make it lucrative and fun when you want them to pay attention.
  • Stuff that matters, Mother Nature, everything actually takes a looooooong time.

  • We need to figure out how to build resilient organisations with a mission that goes out further than a week

    • Our mission statement can’t be about market share
    • But about the work that matters
  • Every one of us is capable of doing it

  • There’s a significant advantage to being willing to take a long time, to inexorably evolve bit by bit, day by day, to deal with the Long Run.
    We are capable of building organizations and companies like this.
    We could use emergencies to our advantage, create positive ratchets.
    Drip by drip, day by day, we change the culture.

Don’t forget to catch the show notes for each episode. They’re delightful.
And yes, go subscribe!

by Mario Jason Braganza at May 26, 2018 06:31 PM

May 25, 2018

Anwesha Das

How to use Let’s Encrypt with nginx and docker

In my last blog post, I shared the story of how I set my server up. I mentioned that I’d be writing about getting SSL certificates, so here you go.

When I started working on a remote server somewhere out there on the globe, and letting it into my private space (my home machine), I realised I needed to be much more careful, and secure.

The first step to attain security was to set up a firewall to control unwanted incoming intrusions.
The next step was to create a reverse proxy in nginx :

Let us assume we’re running a docker container on a CentOS 7 host, using the latest ghost image. So first, one has to install docker and nginx, and start the docker service:

yum install docker nginx epel-release vim -y

Along with docker and nginx, we are also installing epel-release, from which we will later get Certbot for the next part of our project, and vim, if you prefer it.

systemctl start docker

Next I started the docker container; I am using ghost as an example here.

docker run -d --name xyz -p 127.0.0.1:9786:2368 ghost:1.21.4

This runs the docker container in the background. I am exposing the container’s port 2368 on port 9786 of localhost (using ghost as an example in this case).


sudo vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

Now we have to set up nginx for the server name xyz.anweshadas.in, in a configuration file named xyz.anweshadas.in.conf. The configuration looks like this


server {
        listen 80;

        server_name xyz.anweshadas.in;

        location / {
                # proxy commands go here as in your port 80 configuration

                proxy_pass http://127.0.0.1:9786/;
                proxy_redirect off;
                proxy_set_header HOST $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header X-Real-IP $remote_addr;
                    }
}

In the above-mentioned configuration we are receiving the http requests on port 80, and forwarding all the requests for xyz.anweshadas.in to port 9786 of our localhost.

Before we can start nginx, we have to set up a SELinux boolean so that the nginx server can connect to any port on localhost.

setsebool -P httpd_can_network_connect 1

systemctl start nginx

Now you will be able to see the ghost running at http://xyz.anweshadas.in.

To protect one’s security and privacy on the web, it is very important to know that the people or services one is communicating with are actually who they claim to be.
In such circumstances, TLS certificates are what we rely on. Let’s Encrypt is one such certificate authority that provides certificates.

It provides certificates for Transport Layer Security (TLS) encryption via an automated process. Certbot is the client side tool (from the EFF) to get a certificate from Let’s Encrypt.

So we need an https (secure) certificate for our server, which we get by installing certbot.
Let’s get started:

yum install certbot
mkdir -p /var/www/xyz.anweshadas.in/.well-known

We now need to make a directory named .well-known in /var/www/xyz.anweshadas.in; this is where certbot will place the challenge files that Let’s Encrypt checks for validation.

chcon -R -t httpd_sys_content_t /var/www/xyz.anweshadas.in

This sets the SELinux context of the directory xyz.anweshadas.in, so that nginx is allowed to serve content from it.

Now we need to make the .well-known directory accessible under our domain, so that Let’s Encrypt can verify it. The nginx configuration is as follows:

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
                alias /var/www/xyz.anweshadas.in/.well-known;
        }

        location / {
                  # proxy commands go here as in your port 80 configuration

                  proxy_pass http://127.0.0.1:9786/;
                  proxy_redirect off;
                  proxy_set_header HOST $http_host;
                  proxy_set_header X-NginX-Proxy true;
                  proxy_set_header X-Real-IP $remote_addr;
         }

}
certbot certonly --dry-run --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

We are performing a test run of the client, obtaining test certificates by placing files in a webroot, without actually saving the certificates to disk. The dry run is important because one can only get certificates for a particular domain a limited number of times (20 times in a week). All the subdomains under a particular domain are counted separately. To know more, go to the manual page of Certbot.

certbot certonly --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

After the dry run completes successfully, we rerun the command, this time without --dry-run, to get the actual certificates. In the command we provide the webroot using -w, pointing to the /var/www/xyz.anweshadas.in/ directory, for the particular domain (-d) named xyz.anweshadas.in.

Let us add some more configuration to nginx, so that we can access the https version of our website.

vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

The configuration looks like:

server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    # add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

To view https://xyz.anweshadas.in, reload nginx.

systemctl reload nginx

In case of any error, go to the nginx logs.

If everything works fine, then move on to the configuration below, which additionally redirects all http requests to https.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
        }

        rewrite ^ https://$host$request_uri? ;

}
server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;


    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    #add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
    # proxy commands go here as in your port 80 configuration

    proxy_pass http://127.0.0.1:9786/;
    proxy_redirect off;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
    }
}

The final nginx configuration (i.e., /etc/nginx/conf.d/xyz.anweshadas.in.conf) looks like the following: it has the rewrite rule forwarding all http requests to https, and the “Strict-Transport-Security” header uncommented.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
         }

        rewrite ^ https://$host$request_uri? ;

}

server {
        listen 443 ssl;

        # if you wish, you can use the below line for listen instead
        # which enables HTTP/2
        # requires nginx version >= 1.9.5
        # listen 443 ssl http2;

        server_name xyz.anweshadas.in;

        ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

        # Turn on OCSP stapling as recommended at
        # https://community.letsencrypt.org/t/integration-guide/13123
        # requires nginx version >= 1.3.7
        ssl_stapling on;
        ssl_stapling_verify on;

        # modern configuration. tweak to your needs.
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;


        # Uncomment this line only after testing in browsers,
        # as it commits you to continuing to serve your site over HTTPS
        # in future
        add_header Strict-Transport-Security "max-age=31536000";


        # maintain the .well-known directory alias for renewals
        location /.well-known {

            alias /var/www/xyz.anweshadas.in/.well-known;
    }

        location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        }
}

So now, hopefully, the website shows the desired content at the correct URL.
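
One more note: Let’s Encrypt certificates are valid for only 90 days, which is why all the server blocks above keep the .well-known alias around for renewals. A root cron entry along the following lines automates the renewal; this is a sketch, and the --post-hook makes certbot reload nginx only when a certificate was actually renewed:

0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"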

For this particular work, I am highly indebted to the book Linux for You and Me, which introduced me to, and made me comfortable with, the Linux command line.

by Anwesha Das at May 25, 2018 05:04 PM

May 11, 2018

Anwesha Das

Is PyCon 2018 your first PyCon?

Is PyCon 2018 your first PyCon? Then you must have had a sleepless night. You must be thinking, “I will be lost in a gathering of 3500 people.” There must be a lot of mixed emotions and anticipation. Are you the only one thinking this way? Do not worry, it is the same with everyone. How can I assure you of that? I had my first PyCon US in 2017, and I too, like you and everyone else, went through the same feelings.

Registration:


Once you enter the area, the first thing you have to do is register yourself. The people at the registration desk are really helpful, so do not hesitate to ask your heart out. If there is a problem, the ever-helpful Jackie will be there to guide you. (If you meet her, please say “hi” to her for me :) ). And if you are volunteering, please give first-timers a special welcome; it really makes them feel at home.

The registration is done and you have the schedule now. Mark the talks you want to attend, and their respective halls too. You might want to set an alarm for them, as you might otherwise miss them while busy in the hallway track (trust me, I have missed a few!).

So now what to do? What are the interesting things to do in PyCon?

Hallway tracks


The hallway track is the best place to find friends. For many people this is the core of the conference; many prefer the hallway track to the actual talks :). People gather in the hallway and discuss not only Python or programming but culture, politics, business, food, several unconnected topics. Choose a conversation you are comfortable with and join in. You might get your next project idea there. The same rule applies at lunch time too. Do not be shy to talk to the person next to you; you might find the very person you wanted to meet. People are welcoming here. Ask them if you can join; they would generally love the idea. If you are a regular at PyCon, please include a new PyCon attendee in your group :)

Booth visit

The sponsors are the people who make the conference run, so visit them. You might find the new, interesting gig you are looking for. And yes, do not forget to collect the cool swag.


5k Fun Run/Walk

If you love to run, you may like to join the 5K Fun Run/Walk. Ashley is there at the 5K Fun Run/Walk booth (turn to the right of the registration booth) to help. Please pick up your bib, shirt, and information on getting to the park!

Board game night

Inquire about the board game night, if you are interested.

PyLadies Lunch

It is the lunch by and for the PyLadies, a gathering of women who love to code in Python. If you consider yourself a PyLady, do attend it; talk about your local PyLadies chapter, your hurdles and successes. You never know, one of your personal stories might inspire another PyLady to grow and face her own struggles. You will find like-minded people over there. And never miss giving a shout-out to any PyLady who has inspired you in any way; you may always take several names instead of one. If you are there, please raise a toast on my behalf for Naomi, Lorena, Ewa, Carol, Betsy, Katie, Lynn, Jackie, and yourself too :). So register now.

No photo please

PyCon gives you the space and the right to stay anonymous and not be photographed. If you do not want to be photographed, please step out of the frame and convey your wish. You can also ask the person to delete a photo which mistakenly has you in it.

The pronoun you prefer

While registering, pick up the 'The pronoun I prefer' badge.

PyLadies Auction

Saturday night is the PyLadies auction. Be a part of this fun fair (with a good cause). Read about it here.

Quiet room

If you want to work, want to be left alone, or need some space in a gathering of 3500 people, find the quiet room.

First time speaker?

Are you speaking for the first time at PyCon? Nervous? Do not want to leave any room for mistakes? Want to rehearse your talk? There is a speakers’ room to practice in. Another easy way to rehearse is to grab someone (whose opinion you value) and give the talk to her. This will give you a proper third-party view and a sense of the audience’s likely response. Last year Naomi helped me do this; she sat with me for hours and corrected me. I never had a chance to say, “Thank you, Naomi”.

Poster Presentation and Open Spaces

Do not forget to visit the Poster Presentations and Open Spaces to know what is happening in the current Python world.

Code of Conduct

PyCon, as [Peter] says, is the “holy gathering of the Python tribe”. We are all part of a diverse community; please respect that. Follow the Code of Conduct; this is the rule book you have to abide by at all times. If you have any issue, please do not hesitate to contact the staff, and rest assured that they will take the required measures. Lastly, do not hold yourself back from saying “sorry” and “thank you”. These two magical words can solve many problems.

One thing is for sure: your life after these 3 days will be completely different. You will come back wealthy with knowledge, lovely memories and friends.


PS: A huge shout out to the PyCon staff for working relentlessly over the year to put up this great PyCon for you. And thank you for coming and attending PyCon and making it the great event it is.

by Anwesha Das at May 11, 2018 04:17 PM

April 20, 2018

Farhaan Bukhsh

Writing Chuck – Joke As A Service

Recently I got really interested in learning Go, and to be honest I found it to be a beautiful language. I personally feel that it has the performance-boost factor that comes from a static-language background, and the easy-prototyping, get-things-done philosophy that comes from a dynamic-language background.

The real inspiration to learn Go was the amazing number of tools written in it, and the ease with which these tools perform although they seem quite heavy. One of the good examples is Docker. So I thought I would write some utility for fun. I have been using fortune, a Linux utility which gives random quotes from a database. I thought, let me write something similar, but let me do something with jokes. Keeping this in mind, I was searching for what I could do, and I landed on jokes about Chuck Norris, or as we say, facts about him. I landed on chucknorris.io; they have an API which can return different jokes about Chuck, and there it was, my opportunity to put something up, and I chose Go for it.

JSON PARSING

The initial version of the utility which I put together was quite simple: it would make a GET request, stream the data into the given format, and display the joke. But even with this implementation I learnt a lot of things; the most prominent ones were how a variable is exported in Go, i.e. how it can be made available across scopes, and how to parse the JSON from a received response to store the useful information in a variable.
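
A minimal sketch of that first version (the struct, the json tags and the &joke pointer are the parts walked through below; the exact endpoint, field set and error handling are my reconstruction, not the original gist):

package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

// Fields must start with a capital letter (i.e. be exported) for
// encoding/json to fill them in; the json tags map each field to
// the lowercase key in the API response.
type Joke struct {
    Category []string `json:"categories"`
    ID       string   `json:"id"`
    Value    string   `json:"value"`
}

func main() {
    res, err := http.Get("https://api.chucknorris.io/jokes/random")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer res.Body.Close()

    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println(err)
        return
    }

    var joke Joke
    // Unmarshal the received JSON into the struct; we pass &joke,
    // the address of the variable, so the decoder can fill it in.
    if err := json.Unmarshal(body, &joke); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(joke.Value)
}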

Now, the mistake I was making in the above code was declaring the fields of the struct with small (lowercase) letters. This caused a problem because, although the values got stored in the struct, I couldn’t use them outside the function I had declared it in. It actually took me a while to figure out, and it was really nice to learn about this, along with how to make a GET request, parse the JSON, and use the received values.

Let’s walk through the code. The initial part is a struct, and I have a few fields inside it; the Category field is a slice of strings, which can hold as many elements as it receives. The interesting part is the way you can specify which key from the received JSON ends up stored in which variable or field of the struct: the json:"categories" tag is the way to do it.

In the rest of the code, you can see I am making a GET request to the given URL; if it returns a response, it goes into res, and if it returns an error, that is handled through err. The key part here is how the marshalling and unmarshalling of JSON takes place.

This is basically the folding and unfolding of JSON. Once that is done and the values are stored, we just use dot notation to retrieve them, and done. There is one more interesting part: if you look, we passed &joke, which, if you have a C background, you will realise is passing the memory address; pass by reference is what you are looking at.

This was working well and I was quite happy with it, but there were two problems I faced:

  1. The response used to take a while to return the jokes
  2. It doesn’t work without internet

So I showed it to Sayan, and he suggested building a joke-caching mechanism. This would solve both problems: since the jokes would be stored internally on the file system, they would take less time to fetch, and there would be no dependency on the internet, except at the time you are caching jokes.

So I designed the utility in such a way that you can cache as many jokes as you want: you just have to run chuck --index=10, and this will cache 10 jokes for you and store them in a database. From those jokes, a random one is then selected and shown to you.

I learnt to use flag in Go and also how to integrate a sqlite3 database into the utility, but the best learning was handling files. My logic was that any time you cache, you should get a fresh set of jokes, so whenever you cache, I completely delete the database and create a new one for the user. To do this, I need to check whether the database already exists, and remove it if it does. I landed up looking for the answer on how to do that in Go. There are a bunch of built-in APIs which help you do it, but they were misleading for me. There are os.Stat, os.IsExist and os.IsNotExist. What I understood was that os.Stat would give me the status of the file, while the other two would tell me whether the file exists or not; to my surprise, things don’t work like that. IsExist and IsNotExist are two different error wrappers, and guess what, the negation of IsExist is not IsNotExist; good luck wrapping your head around that. I eventually ended up answering this on Stack Overflow.
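
The check I finally settled on looks something like this (a sketch; chuck.db is a made-up name for the cache database):

package main

import (
    "fmt"
    "os"
)

func main() {
    dbPath := "chuck.db" // hypothetical path to the jokes cache

    // os.Stat succeeding (err == nil) is the reliable "file exists"
    // signal; no need to juggle os.IsExist / os.IsNotExist here.
    if _, err := os.Stat(dbPath); err == nil {
        os.Remove(dbPath) // wipe the old cache so we start fresh
    } else if os.IsNotExist(err) {
        fmt.Println("no old cache to remove")
    }
}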

After a few iterations of using it on my own and fixing a few bugs, the utility is ready, except for the fact that it is missing test cases, which I will integrate soon. This has helped me learn a lot of Go, and I have something fun to suggest to people. Well, I am open to contributions, and I hope you will enjoy this utility as much as I do.

Here is a link to chuck!

Give it a try and till then Happy Hacking and Write in GO! 

Featured Image: https://gopherize.me/

by fardroid23 at April 20, 2018 07:42 AM

April 04, 2018

Saptak Sengupta

What's the preferred unit in CSS for responsive design?

Obviously, it’s %. What a stupid question, right? Well, you can sometimes use em, maybe. Or maybe vh/vw. Well, anything except px, for sure.

Sadly, the answer isn’t as simple as that. In fact, there is no proper answer to that question; it depends a lot on the design decisions taken, rather than on some fixed rule for making websites responsive. The funny thing is, many times you might actually want to use px instead of %, because the latter is going to mess things up. In this blog, I will try to describe some scenarios in which each of the units works better in a responsive environment.

PS: All of this is mostly my personal opinion and working experience. Not to be mistaken for a rule book.


Where to use %?

Basically, anywhere you have some kind of layout or grid involved. I use % whenever I feel that the screen needs to be divided into proportions rather than fixed sizes. Let’s say I have a side navigation area and the remaining body of a website: I would use % in this case to measure the margins and the distribution of area, since I definitely want them to vary with the screen (see the sketch below). The same goes for a grid of rows and columns: I want the width of the grid to be in percentage, so that the number of columns allowed changes with the width of the screen.
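
A tiny sketch of that split (the class names are mine, not from any framework):

.sidenav { width: 25%; float: left; }  /* proportions, so both areas  */
.main    { width: 75%; float: left; }  /* scale with the screen width */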

Another use of percentages is in the margins of an element. You might want much more margin on a wider screen than on a smaller one, so margin-left and margin-right are often best given in percentages.

Also, for fonts and other typography elements, you should prefer % over px, since fixed pixel font sizes don't sit well with the W3C accessibility guidelines.
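A small sketch of the kind of percentage-based layout described above (the class names are made up for illustration):

    /* Sidebar plus main body, divided proportionally rather than fixed */
    .sidenav {
      width: 25%;
      float: left;
    }

    .content {
      width: 70%;
      float: right;
      margin-right: 2.5%; /* horizontal margins in % so they scale with the screen */
    }

    body {
      font-size: 100%; /* percentage-based typography, relative to the browser default */
    }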

PS: Quite often, it is preferable to use flexbox and grid layout instead of laying things out with margins and floats.

Where to use px?

To be honest, yes, it is better to avoid px when you want things to stay fluid and responsive. But having said that, there are cases where, even in a fluid design, you want things to have a fixed value. One of the most common examples is the top navigation bar: you don't want its height to change with the screen size. You might want the width to change, or to show a hamburger button instead of the list of hyperlinks, but you usually want the height of the navigation bar to stay fixed. You can do this by setting the height property in CSS, or perhaps with padding, but the unit should mostly be px.

Another use is for the margins of an element, but this time the top and bottom margins instead of left and right. Mostly, when your website is divided into sections, you want the margin between the sections to have a fixed value that doesn't change with the screen width.
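For instance (again, a sketch with hypothetical class names):

    /* A navbar whose height survives any screen width */
    .topnav {
      height: 56px;
    }

    /* Fixed vertical rhythm between page sections */
    .section {
      margin-top: 48px;
      margin-bottom: 48px;
    }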

Where to use em?

em is mainly to be used when setting font sizes and other typography elements. By that I mean that wherever text is involved, it is often good to use em. The size of 1em depends on the font-size of the parent element, so if the parent element has a font-size of 100% (the browser default), then 1em = 16px. But em is a compounded measure: the more nested your elements are, the more the effective value of em keeps changing, which can make it very tricky to work with. Then again, this is also a feature that sometimes helps when you actually want a compounded font size.
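A sketch of that compounding in action, assuming the browser default root size of 16px:

    .parent {
      font-size: 1.25em;  /* 1.25 x 16px = 20px */
    }

    .parent .child {
      font-size: 1.25em;  /* compounds: 1.25 x 20px = 25px */
    }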

Where to use rem?

The main difference between em and rem is that rem depends on the font-size of the root element of the website (basically "root em"), i.e. the <html> element. Whatever the font-size of the root element is, rem is always computed from that, so unlike em the computed pixel value stays uniform throughout the website. Choosing between rem and em therefore depends highly on the use case.
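Continuing the sketch above, a rem value ignores the nesting entirely:

    .parent .child-rem {
      font-size: 1.25rem; /* always 1.25 x 16px = 20px, however deeply nested */
    }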

Where to use vh or vw?

vh and vw stand for viewport height and viewport width. Their advantage over % widths is that you can size things based on both the height and the width of the viewport. 1vh means 1% of the viewport height, so if you want something to take up the entire height of the screen, you use 100vh. This applies both to widths and to font sizes: you can make the font size of a heading scale with the height or width of the viewport instead of with the font size of its parent element. Similarly, you can set line-height based on the viewport height instead of the parent's measurements.
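For example (a sketch):

    /* A hero section that always fills the screen */
    .hero {
      height: 100vh;
    }

    /* A headline that scales with the viewport width, not with its parent */
    .hero h1 {
      font-size: 6vw;
    }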

Where to use media queries?

Even after using all of the above strategically, you will almost certainly need media queries. Media queries are a more definitive set of CSS rules based on the actual width of the device's screen. We mostly use them conditionally, much as in other programming languages: if the media screen width is less than 720px, make this 10% wide; else, make it 25% wide. But why do we need this? The main reason is the aspect ratio of screens. On a desktop, the width of the screen is much greater than the height, so an element 25% wide might not occupy a whole lot of the screen. On a mobile screen, where the width is much smaller than the height, 25% might actually occupy more area than you want it to. Hence media queries are needed so that, in the transition from wide screens to narrow screens, even the percentage widths change.
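The 720px example above would look something like this:

    .sidenav {
      width: 25%; /* roomy enough on wide desktop screens */
    }

    /* On narrow screens, shrink the proportion too */
    @media (max-width: 720px) {
      .sidenav {
        width: 10%;
      }
    }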


As far as I can tell, there are use cases and scenarios where each of these units is useful. Yes, px is the least-used unit if you are concerned about responsiveness, but there will always be some elements on your website that you want to give a fixed width or height. All the other measures change with the screen, but the way they change differs from unit to unit, so the choice depends a lot on the designer and the frontend. Also, the newer CSS modules (often loosely called CSS4) have added a lot of features that make handling layout a lot easier than before.

by SaptakS (noreply@blogger.com) at April 04, 2018 11:47 AM

April 02, 2018

Saptak Sengupta

FOSSASIA 2018: Conference Report


FOSSASIA 2018 was my 2nd FOSSASIA conference, and this year it was at a new venue (Lifelong Learning Institute) and ran for a longer time. As always, there were a lot of speakers and a lot of really exciting sessions. Last year I was a little confused, so this year I planned in advance which talks to attend and what to do.

22nd March (1st Day)

The opening ceremony was kicked off by Harish Pillay and Damini Satya, both of whom did an incredible job of hosting the entire day. It was followed by the keynote talks and a panel discussion, which gave great insight into how open source, AI, blockchain and other modern-day technologies work hand in hand. Harish Pillay also shared his view that AI won't take over human beings; rather, human beings will evolve into something that is a combination of human and AI, and hopefully have a good future. I agree with him to some extent.

Hong addressed the audience about the primary focus of FOSSASIA for the next few years, and how it involves helping more developers get involved in open source and build cool new things. Codeheat winners were awarded next for their wonderful contributions to different FOSSASIA projects. The mentors of the projects were also honored with medals, which was something I wasn't expecting. Then it was time for the track overviews, to help people understand what the different tracks were all about: we described each track and why the audience should be interested. With that, it was time for the most important track of all, the Hallway Track, so people talked and networked in the exhibition area for the rest of the day.

23rd March (2nd Day)

I was the moderator of the Google Training Day and also of the cloud track in one of the rooms, which meant getting up early and reaching the venue on time. Fortunately, I made it (I still don't know how). Being the moderator, I was there almost the entire day, which meant a lot of Google Cloud learning for me. The talks ranged from handling big-data queries with BigQuery to doing machine learning with Cloud ML. The Google Training Day talks were followed by a talk on serverless computing and a tutorial on Kubernetes. After that, it was again time to hang out in the exhibition area and talk with people.

24th March (3rd Day)

This was the day of my talk. I was pretty worried the night before about whether I would make it to my own talk, since it was at 9.30 in the morning. I did make it. What was more surprising was that there were actually more people than I expected at 9.30 in the morning, which was great. Apart from a few technical glitches in the middle of my talk, everything went pretty smoothly. I talked about how we at Open Event decoupled the architecture into a separate backend and frontend, and how that really helps development and maintenance. I also gave a brief overview of the architectures involved and of the code and file structures.

After finishing my talk, I attended the SELinux talk by Jason Zaman. SELinux is a confusing and mystifying topic for most people, and there was no way I was missing this talk. He gave a hands-on session on setting up SELinux policies and using audit logs. Next was the all-women panel on open source and tech. After that came the obligatory group photo, where the number of participants made things a little too difficult for the photographer.

The rest of the day was pretty involved: I mentored at the UNESCO hackathon, helped with video recording, and so on.

25th March (4th Day)

The final day of the event. I was really interested in attending the talk on Open Source Design by Victoria, and so reached the venue by 10 am. It gave great insight into how Open Source Design is evolving and bringing more and more designers into open source, which is really great. The last session I was eagerly waiting for was the GPG/PGP key-signing event. I had a lot of fun helping people create and sign their first GPG/PGP keys, and met and interacted with some really awesome people there.



At last, it was time for the conference closing ceremony. But it wasn't over yet: we all met at the hackerspace, where I had some great discussions about the different projects I work on, and it was really great to hear people's views.

All in all, it was really great meeting old friends, making new ones, and meeting people I had known only by their nicks. More than the talks themselves, what makes a great conference is the people in it and the chance to meet them once a year. At least, that's how I see it. And FOSSASIA 2018 served that purpose wonderfully.

by SaptakS (noreply@blogger.com) at April 02, 2018 05:14 AM

March 14, 2018

Sanyam Khurana

International Women's Day with WoMoz in Delhi

We all know that every year, 8th March is celebrated as International Women's Day. It is a focal point in the movement for women's rights. On this occasion, all the Open Source communities in and around Delhi came together on March 10, 2018 to hold a mega-meetup encouraging more women to take an active part in Open Source & tech.

We were astonished to see a huge turnout of 180 people, including 150+ women participants.

Group photo of all attendees

Mozilla Delhi, PyDelhi, PyLadies Delhi, LinuxChix India, Women Who Code Delhi, Women Who Go Delhi, Women Techmakers Delhi, and Women in Machine Learning and Data Science were the communities that helped shape the event.

Here are some of the volunteers who helped to make the event possible

Group photo of volunteers

We had 3 main technical talks, all presented by women with a decade of experience in the technical field. Apart from that, we had several lightning talks and community talks.


Kanika gave a lightning talk on "WoMoz" & encouraged students to contribute to Mozilla.


Later, I got a chance to give a lightning talk on "Why you should contribute to Open Source", to help & encourage folks to contribute to Open Source projects.


I want to thank everyone who helped with the event, & Adobe for sponsoring the venue. Don't forget to join the Open Source groups in & around Delhi that you're interested in. As always, if you need any help with contributing, drop me a mail at Sanyam [at] SanyamKhurana [dot] com.

You can check out more photos of the event here

by Sanyam Khurana at March 14, 2018 09:21 PM

March 01, 2018

Sanyam Khurana

MozAMU: Mozilla Addons Development at AMU

It all started with PyCon India. I met a few students of Aligarh Muslim University who were trying to spread the word about FOSS in their college. A few of them were already contributing to Coala. We talked a bit, and they described the problems they were facing in running that community. I had already started a Mozilla community at my own college in my earlier days. Since I had brought along a lot of folks from my college community to the event, we discussed How to nurture FOSS communities in college as part of the PyCon India Open Spaces. Here is a glimpse of the same:

PyCon India Open Spaces on how to nurture FOSS communities in college

Later, these excited folks invited me to speak at their college. A lot of planning went into the event, and we were in touch almost daily about different things; we planned the event one and a half months in advance. Finally, we decided to hold a full-day event around FOSS at Aligarh Muslim University on 24th Feb 2018.

We left Delhi at around 6:30 AM. We halted for breakfast at a cafe called Break Point near Aligarh, and reached the university at around 10:20 AM.

We took some time to test the entire setup, and the event began at around 11:30 AM. I took the first session, on Why you should contribute to Open Source. We discussed the question that sooner or later pops up in everyone's mind when contributing to Open Source: What's in it for me?

We discussed the various pathways through which one can begin contributing to Open Source projects: coding, writing docs, managing teams, advocacy, translation, bug triaging, reviews, organizational skills, soft skills & the tons of other things that come as a by-product.

Curious & enthusiastic attendees learning about different contribution pathways in FOSS

You can find the slides here: Why you should contribute to Open Source?.

Curious & enthusiastic attendees at Mozilla AMU

We had a break for some time and then started with the much-awaited add-ons session. Everyone was very excited to start learning to develop add-ons. I began by explaining the initial setup details & discussed JSON briefly to bring everyone on the same page. To make the most of the event, we had also organized a few sessions earlier to teach the basics of JavaScript & JSON, so that students would not feel overwhelmed by add-on development.

Curious & enthusiastic attendees learning from Sanyam Khurana (CuriousLearner)

Then Shashank (@realslimshanky) took over and discussed manifest.json and its importance. Later, we developed a simple add-on, borderify, which draws a border on every site the user visits. Some students also modified their scripts to make their add-ons do different things and posted them on Twitter. Shashank, Shivam Singhal (@championshuttler) and I helped everyone with their problems during the development phase.
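For reference, a borderify-style add-on needs only two files. This is a minimal sketch along the lines of the MDN example; the <all_urls> match pattern is my assumption, to cover every site the user visits.

manifest.json:

    {
      "manifest_version": 2,
      "name": "borderify",
      "version": "1.0",
      "description": "Adds a red border to every page you visit.",
      "content_scripts": [
        {
          "matches": ["<all_urls>"],
          "js": ["borderify.js"]
        }
      ]
    }

borderify.js:

    // Runs on every matching page and draws the border.
    document.body.style.border = "5px solid red";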

Curious & enthusiastic attendees learning from Sanyam Khurana (CuriousLearner)

Since all of us are devs, we could quickly resolve the students' queries. One of the most important things I noticed is that people often misspell either the name of their manifest file or some key inside it, resulting in their add-on failing to load during the debugging phase.

A lot of them then created add-ons on their own and modified their previously created borderify add-on to do more stuff. I've tried to collect some of them here. For a more verbose list, you can visit Twitter and search for tweets tagging me (@ErSanyamKhurana) with #MozAMU.

Imaginations ran wild, and one of the attendees created an add-on that replaces the word Google with Mozilla on every web page. You can see his hack here

To make the session more interesting, we gave add-on stickers to anyone who answered a question about what we had just taught. A lot of folks praised the event and tweeted about the add-ons they were building. We generated 10,000+ impressions and reached more than 4,000 accounts on Twitter with #MozAMU.

Twitter outreach report on the number of impressions created for #MozAMU
Twitter outreach report on the top contributors for #MozAMU

You can read the full report here.

Up to this point, I had never introduced myself to anyone, and, most importantly, had not listed my contributions to any of the projects. I didn't want them to feel overwhelmed and assume that we have some sort of superpowers that let us patch bugs in any FOSS project. I always make it a point to encourage them and help them land their first patch.

And my introduction summed that up in just one line: I'm one of you, a part of the community.

Sanyam Khurana (CuriousLearner) teaching about essential things to learn while contributing to FOSS projects

We then hopped on to discuss How do I start contributing to Open Source?, where we looked specifically at how to find bugs in different projects through Bugzilla & Bugsahoy.

Enthusiastic folks learning about contributions from Sanyam Khurana (CuriousLearner)

You can find the slides here. Then I discussed other Open Source projects I've contributed to, like CPython, Django, Oppia, and Mozilla's DevTools & Gecko engine, and tried to point out the similarities between their various bug trackers. We then had a group photo with some of the attendees.

Group photo at Mozilla AMU

It was already 5 PM, and we hadn't gone for lunch, since the students kept us busy with their questions through the lunch break too :P

Curious & enthusiastic students asking questions in break

So, we decided to hop over to a nearby restaurant with the core team of students who had helped organize the event with so much enthusiasm.

Curious & enthusiastic attendees at Mozilla AMU

Then, before leaving Aligarh at around 7:00 PM, we clicked one last photo with the core volunteers who had helped with all the preparations for the event.

Group photo Mozilla AMU

In the end, I would like to congratulate the students for making such wonderful arrangements and for pushing the FOSS community forward in their college. I hope they will now start landing patches in different FOSS projects & that we'll all meet again soon.

by Sanyam Khurana at March 01, 2018 11:41 AM