Planet dgplug

July 20, 2018

Shakthi Kannan

Elixir Workshop: MVJ College of Engineering, Bengaluru

I had organized a hands-on scripting workshop using the Elixir programming language for the Computer Science and Engineering department, MVJ College of Engineering, Whitefield, Bengaluru on May 5, 2018.

Elixir scripting session

The department was interested in organizing a scripting workshop, and I felt that a new programming language like Elixir, backed by the power of the Erlang Virtual Machine (VM), would be a good choice. The syntax and semantics of the Elixir language were discussed along with the following topics:

  • Basic types
  • Basic operators
  • Pattern matching
  • case, cond and if
  • Binaries, strings and char lists
  • Keywords and maps
  • Modules and functions

Students had set up Erlang and Elixir on their laptops, and tried the code snippets in the Elixir interpreter. The complete set of examples is available in the following repo:

https://gitlab.com/shakthimaan/elixir-scripting-workshop

A group photo was taken at the end of the workshop.

Elixir scripting session

I would like to thank Prof. Karthik Myilvahanan J for working with me in organizing this workshop.

July 20, 2018 01:00 PM

July 19, 2018

Jason Braganza (Personal)

Daily Writing, 70

_MG_3825


I will love the light for it shows me the way, yet I will endure the darkness because it shows me the stars.

Og Mandino

by Mario Jason Braganza at July 19, 2018 07:27 AM

Daily Writing, 69

_MG_3806


Tomb of Flavia Iulia Helena Augusta, Empress of the first Christian Empire.

Memento, homo, quia pulvis es, et in pulverem reverteris.

“Remember, man, that thou art dust, and to dust thou shalt return.”

by Mario Jason Braganza at July 19, 2018 07:05 AM

July 17, 2018

Jason Braganza (Work)

Programming, Day 31

  • Got back on the Python horse.
  • Using the PYM book to learn along, in the DGPLUG Summer Training.
  • Installed Mu.
  • Hanging on for dear life and trying to follow along.

by Mario Jason Braganza at July 17, 2018 03:23 PM

July 16, 2018

Jason Braganza (Work)

Programming, Day 30, Ansible

I finally got tired of rebuilding my servers from scratch every time.
It hadn’t troubled me enough to do something about it, until recently.
I got myself a PC to do Linux development on, and I keep nuking the OS and reinstalling.

Rebuilding it over and over was exciting in the beginning, and then it suddenly began to grate on my nerves.
So I decided to put in my twenty hours after my break and learn Ansible.

I know Ansible!


At its heart, it is a recording and playback engine for setting up computers.
You record the steps you usually do in a text file, using a language called YAML, on your Mac or PC or what have you.
And then you play back those actions on your server or target PC.
Ansible gives you primitives, the basic building blocks, called modules, to do just about anything you wish.

In my case,

  • I set up a barebones server running Bionic Beaver and configured it for SSH access
  • Everything from then on was controlled by the YAML playbook I was building step by step
  • I updated the machine
  • I configured three users
  • I set up UFW & Fail2Ban
  • NGINX was next
  • I configured my 4 little play subdomains
  • And finally I configured Let’s Encrypt and enabled SSL

And that’s about all I wanted from my basic machine so far.
Running the script start to finish takes about 30 mins and I have a machine ready to go!
Doing all that by hand is fraught with errors and takes me nearly half a day.
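The steps above can be sketched as a minimal playbook. This is only an illustration, far shorter than my actual one; the host group, user names, and package choices are placeholders:

```yaml
# Minimal sketch of the kind of playbook described above.
# "playserver", the user names and the package list are placeholders.
- hosts: playserver
  become: true
  tasks:
    - name: Update all packages
      apt:
        upgrade: dist
        update_cache: true

    - name: Create users
      user:
        name: "{{ item }}"
        shell: /bin/bash
      loop:
        - alice
        - bob

    - name: Install UFW, Fail2Ban and NGINX
      apt:
        name: [ufw, fail2ban, nginx]
        state: present

    - name: Allow SSH through the firewall
      ufw:
        rule: allow
        name: OpenSSH

    - name: Enable the firewall
      ufw:
        state: enabled
```

Each task maps to one of the steps in the list above, and because the modules are idempotent, replaying the recording on an already-configured machine is safe.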

Every other task I need done now, I’ll start doing via Ansible.

There’s obviously lots more to learn. The playbook (my recording) started from nothing and has now grown to an unwieldy 200-odd lines.
I can hive them off into other files and call them separately.
I can optimize what I’ve written and make it portable, so that I can set up any server I wish, not just mine.

But all that is for later.

I did this, so I could have a machine to trash and rebuild quickly.
Now that I have one, Python, here I come.

P.S.
It also gave me a small sense of how coding actually works.
It was slow steady progress.
Building a bit, testing, iterating, tinkering and playing.
And at the end of the day, I have something that I can call my own, something I built and something that makes my work easier.
By Jove, this is going to be fun!

by Mario Jason Braganza at July 16, 2018 03:53 PM

July 11, 2018

Kushal Das

Using podman for containers

Podman is one of the newer tools in the container world; it can help you run OCI containers in pods. It uses Buildah to build containers, and runc or any other OCI-compliant runtime. Podman is being actively developed.

I have moved the two major bots we use for dgplug summer training (named batul and tenida) under podman, and they have been running well for the last few days.

Installation

I am using a Fedora 28 system; installation of podman is as simple as for any other standard Fedora package.

$ sudo dnf install podman

While I was trying out podman, I found it was working perfectly in my DigitalOcean instance, but not so much on the production VM: I was not able to attach to the stdout.

When I tried to get help in the #podman IRC channel, many responded, but none of the suggestions helped. Later, I gave access to the box to Matthew Heon, one of the developers of the tool. He identified that the Indian timezone offset (+5:30) was too large for the timestamp buffer, and that was causing the trouble.

The fix was pushed fast, and a Fedora build was also pushed to the testing repo.

Usage

To learn about different available commands, visit this page.

The first step was to build the container images; it was as simple as:

$ sudo podman build -t kdas/imagename .

I reused my old Dockerfiles for the same. After this, it was just simple run commands to start the containers.
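For example, starting one of the bots looks something like the following. This is a sketch, not the exact commands used; batul is one of the bot container names mentioned above, and the image name is the one built earlier:

```shell
# Start a container in the background from the image built above
$ sudo podman run -d --name batul kdas/imagename

# List running containers and check the bot's output
$ sudo podman ps
$ sudo podman logs batul
```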

by Kushal Das at July 11, 2018 07:05 AM

July 10, 2018

Farhaan Bukhsh

Benchmarking MongoDB in a container

The database layer of an application is one of its most crucial parts, because, believe it or not, it affects the performance of your application. Now, with micro-services getting the attention, I was wondering whether having a database container makes a difference.

As we have popularly seen, most of the containers used are stateless containers; that means they don’t retain the data they generate. But there is a way to have stateful containers, and that is by mounting a host volume in the container. Having said this, there could be an issue with latency in the database requests. I wanted to measure how large this latency would be, and what difference it makes if the installation is done natively versus in a container.

I am going to run a simple benchmarking scheme: I will make 200 insert (that is, write) requests, keeping all other factors constant, and will plot the time taken for these requests and see what comes out of it.

I borrowed a quick script to do the same from this blog. The script is simple; it just uses pymongo, the Python MongoDB driver, to connect to the database and make 200 entries in a random database.

import time
import pymongo

m = pymongo.MongoClient()

doc = {'a': 1, 'b': 'hat'}

i = 0
while i < 200:
    start = time.time()
    # w=1 makes the write wait for acknowledgement from the server
    m.tests.insertTest.insert(doc, manipulate=False, w=1)
    end = time.time()

    executionTime = (end - start) * 1000  # convert to ms
    print(executionTime)

    i = i + 1

So I went to install MongoDB natively first. I ran the above script twice and took the second result into consideration. Once I did that, I plotted the graph of the time taken against the number of the request. The first request takes time because it has to establish the connection, with all the overhead that involves; the plot I got looked like this.

 

MongoDB (native): time taken in ms vs. number of requests

The graph shows that the first request took about 6 ms, but the consecutive requests took far less time.

Now it was time to try the same thing in a container, so I did a docker pull mongo, then mounted a local volume in the container and started the container with:

docker run --name some-mongo -v /Users/farhaanbukhsh/mongo-bench/db:/data/db -d mongo

This mounts the volume I specified to /data/db in the container. Then I did a docker cp of the script, installed the dependencies, and ran the script again twice, so that file creation doesn’t manipulate the time.

To my surprise, the first request took about 4 ms, but subsequent requests took a lot of time.

MongoDB running in a container: time in ms vs. number of requests

 

And when I compared them, the time difference for each write, i.e. the latency of each write operation, was considerable.

MongoDB benchmark: comparison between native and containered MongoDB

I had the thought that there would be a difference in time and performance, but I never thought it would be this huge. Now I am wondering what the solution to this performance issue is: can we reach a point where containered performance will be as good as native?
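To put a number on that comparison rather than only eyeballing the plots, a small helper like this can summarize each run. The timing lists below are made-up values for illustration, not the actual benchmark results:

```python
import statistics

def summarize(timings_ms):
    """Return (mean, median) latency in ms for a list of per-request timings."""
    return statistics.mean(timings_ms), statistics.median(timings_ms)

# Hypothetical numbers, for illustration only:
native = [6.0, 0.2, 0.3, 0.2, 0.25]
containered = [4.0, 1.5, 1.8, 1.6, 1.7]

print("native:", summarize(native))
print("containered:", summarize(containered))
```

The median is the more robust figure here, since the first request carries the one-off connection overhead.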

Let me know what you think about it.

Happy Hacking!

by fardroid23 at July 10, 2018 08:48 PM

July 05, 2018

Robin Schubert

How to browse blocked sites with Adblocker

People who browse the web with ad-blockers often find themselves on websites that gray out, block scrolling, and show a modal dialog that kindly suggests switching off the ad-blocker or whitelisting that particular page.

Here's a little work-around for how you can continue browsing most of those sites without whitelisting the page or turning off the ad-blocker: by live-editing the HTML.

So whenever you see a banner like this - I've come across this a dozen times now, this is an example from a German online news magazine - open the Web Inspector. There are multiple ways to do this; in Firefox you can open Tools -> Web Developer -> Inspector, or using Chromium it would be Menu -> More tools -> Developer tools, or just hit F12.

I like right-clicking the object I want to inspect and select Inspect element.

So when I inspect the modal dialog and follow the DOM a bit upwards, I find the corresponding <div> tag that describes the dialog. Note the style="display: block;" css rule.

Since we don't want to see this dialog at all, right-click that html element and simply delete the whole node.

Schlurps and the dialog is gone. However, we still have that gray veil. In this example, the responsible <div> tag is the one just right above the previous modal dialog tag. Again we find the style="display: block;" rule and again we simply delete that node.

Finally the website looks almost normal. But shoot! scrolling is deactivated. If you happen to use Vim keybindings for navigating in your browser, you might not even notice. However, to be able to scroll with your mouse or arrow keys, find the <body> tag way up the DOM.

You may have guessed it: right, we're not going to delete the <body> tag. Notice the css rules "overflow-y: hidden; height: 911px;". These hide the scroll bar and set a fixed height to what seems to be my browser window height. You can either delete that css, or - if you want to - modify it to something like "overflow-y: auto; height: 100%;", and you should be browsing that site without ads and annoying modals.
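The same manual steps can be wrapped into a little console snippet. The selectors below are hypothetical placeholders; use the ones you actually found in the inspector for the page at hand:

```javascript
// Sketch of the manual clean-up as a reusable function.
// '.modal-dialog' and '.gray-overlay' are placeholder selectors.
function unblockPage(doc) {
  for (const sel of ['.modal-dialog', '.gray-overlay']) {
    const el = doc.querySelector(sel);
    if (el) el.remove();              // delete the node, as done by hand above
  }
  doc.body.style.overflowY = 'auto';  // bring back the scroll bar
  doc.body.style.height = '100%';     // undo the fixed height
}

// In the browser console: unblockPage(document);
```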

by Robin Schubert at July 05, 2018 12:00 AM

June 24, 2018

Farhaan Bukhsh

Debugging Python with Visual Studio Code

I have started using Visual Studio Code, and to be honest, I feel it’s one of the best IDEs on the market. I’m still a Vimmer; given a chance, I still use Vim for small edits or for carrying out nifty text transformations. After Vim, the next tool that has really impressed me is VSC; the innovations the team is making and the utility it provides are almost a super power.

This post is about one of the utilities that I have been using very recently. This is a skill that I have been trying to harness for a long time. For every person who writes code there comes a time when they need to figure out what is going wrong; there’s a need to debug the code.
The most prominent and well-used debugging tool is the print statement. To be really honest, it doesn’t feel (to me) quite right to use print statements to debug my code, but that’s the handiest way to figure out the flow and inspect each variable. I’ve tried a lot of debuggers, and it always feels like extra effort to actually take a step up and use them. This could be one of the reasons I have not used them very intensively. (Although I have used pudb extensively.)

But with VS Code, the debugger is integrated really well; it feels very natural to use it. Recently, when I was working on a few scripts and trying to debug them, I went on exploring a little more with the Python debugger in VS Code.

So I have this script and I want to run the debugger on it. Hit Ctrl + Shift + P (Cmd + Shift + P on a Mac); this opens the command palette. Just type debug and you will see the option Debug: Start Debugging.

 

Screenshot 2018-06-24 22.45.31

 

This actually creates a launch.json file in your project. You can put all your configuration in there. We’ll edit the config file as we go; since it is not a Django or Flask project, we will use the current-file configuration. That looks like this:

{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "${file}"
}

You can set pythonPath here if you are using a virtual environment; name sets the name of the configuration; type is the type of file being debugged; and request can be used to debug in different ways. Let’s make our config more customised:
{
    "name": "Facebook Achieve Debug",
    "type": "python",
    "request": "launch",
    "program": "${file}"
}
Screenshot 2018-06-25 00.23.42
If you observe, there’s a red dot at line 50. That is called a breakpoint, and that is where the program will stop, letting you observe variables and see the flow of the program.
Let’s see what the screen looks like when you do that,
Screenshot 2018-06-25 00.34.34
This is the editor in full flow. You can see the stack being followed, and you can also go and inspect each variable.
With the debug console (lower right pane) you can even run code that you want to try, or inspect variables there. Now, let us look at the final config and see what is going on.
{
    "name": "Python: Current File",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "pythonPath": "/Users/farhaanbukhsh/.virtualenvs/facebook_archieve/bin/python",
    "args": [
        "--msg",
        "messages"
    ]
}

If you observe, I have pythonPath set to my virtualenv, and I have one more key, args, which holds the command-line arguments to be passed to the script.
I still use print statements sometimes, but I have made it a point to start using the debugger as early as possible because, believe it or not, this definitely helps a lot and saves time.

by fardroid23 at June 24, 2018 07:31 PM

May 30, 2018

Kushal Das

Tor Browser and Selenium

Many of us use Python Selenium to do functional testing of our websites or web applications. We generally test against the Firefox and Google Chrome browsers on the desktop. But there are also a lot of people who use Tor Browser (from the Tor Project) to browse the internet and access web applications.

In this post we will see how we can use the Tor Browser along with Selenium for our testing.

Setting up the environment

The first step is to download, verify, and then extract the Tor Browser somewhere in your system. Next, download and extract geckodriver 0.17.0 somewhere in the path. For the current series of Tor Browsers, you will need this particular version of geckodriver.

We will use pipenv to create the Python virtualenv and also to install the dependencies.

$ mkdir tortests
$ cd tortests
$ pipenv install selenium tbselenium
$ pipenv shell

tor-browser-selenium is the Python library required for Tor Browser Selenium tests.

Example code

import unittest
from time import sleep
from tbselenium.tbdriver import TorBrowserDriver


class TestSite(unittest.TestCase):
    def setUp(self):
        # Point the path to the tor-browser_en-US directory in your system
        tbpath = '/home/kdas/.local/tbb/tor-browser_en-US/'
        self.driver = TorBrowserDriver(tbpath, tbb_logfile_path='test.log')
        self.url = "https://check.torproject.org"

    def tearDown(self):
        # We want the browser to close at the end of each test.
        self.driver.close()

    def test_available(self):
        self.driver.load_url(self.url)
        # Find the element for success
        element = self.driver.find_element_by_class_name('on')
        self.assertEqual(str.strip(element.text),
                         "Congratulations. This browser is configured to use Tor.")
        sleep(2)  # So that we can see the page


if __name__ == '__main__':
    unittest.main()

In the above example, we are connecting to https://check.torproject.org and making sure that it informs us we are connected over Tor. The tbpath variable in the setUp method contains the path to the Tor Browser on my system.

You can find many other examples in the source repository.

Please make sure that you test your web applications against Tor Browser; having more applications which run smoothly on top of the Tor Browser will be a great help for the community.

by Kushal Das at May 30, 2018 04:39 AM

May 25, 2018

Anwesha Das

How to use Let’s Encrypt with nginx and docker

In my last blog post, I shared the story of how I set my server up. I mentioned that I’d be writing about getting SSL certificates, so here you go.

When I started working on a remote server somewhere out there on the globe, and letting it come into my private space (my home machine), I realised I needed to be much more careful and secure.

The first step to attain security was to set up a firewall to control unwanted incoming intrusions.
The next step was to create a reverse proxy in nginx :

Let us assume we’re running a docker container on a CentOS 7 host, using the latest ghost image. So first, one has to install docker and nginx, and start the docker service:

yum install docker nginx epel-release vim -y

Along with docker and nginx we are also installing epel-release, from which we will later get Certbot for the next part of our project, and vim if you prefer it.

systemctl start docker

Next I started the docker container; I am using ghost as an example here.

docker run -d --name xyz -p 127.0.0.1:9786:2368 ghost:1.21.4

This runs the docker container in the background, exposing the container’s port 2368 on port 9786 of the localhost (using ghost as an example in this case).


sudo vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

Now we have to set up nginx for the server name xyz.anweshadas.in, in a configuration file named xyz.anweshadas.in.conf. The configuration looks like this


server {
        listen 80;

        server_name xyz.anweshadas.in;

        location / {
                # proxy commands go here as in your port 80 configuration

                proxy_pass http://127.0.0.1:9786/;
                proxy_redirect off;
                proxy_set_header HOST $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header X-Real-IP $remote_addr;
                    }
}

In the above-mentioned configuration, we are receiving the http requests on port 80 and forwarding all the requests for xyz.anweshadas.in to port 9786 of our localhost.

Before we can start nginx, we have to set up a SELinux boolean so that the nginx server can connect to any port on localhost.

setsebool -P httpd_can_network_connect 1

systemctl start nginx

Now you will be able to see the ghost running at http://xyz.anweshadas.in.

To protect one’s security and privacy in the web sphere, it is very important to know that the people or services one is communicating with are actually who they claim to be.
In such circumstances, TLS certificates are what we rely on. Let’s Encrypt is one such certificate authority that provides certificates.

It provides certificates for Transport Layer Security (TLS) encryption via an automated process. Certbot is the client side tool (from the EFF) to get a certificate from Let’s Encrypt.

So we need an https (secure) certificate for our server, which we will get by installing certbot.
Let’s get started:

yum install certbot
mkdir -p /var/www/xyz.anweshadas.in/.well-known

We now need to make a directory named .well-known in /var/www/xyz.anweshadas.in, where Let’s Encrypt will look for the files it uses to validate the domain.

chcon -R -t httpd_sys_content_t /var/www/xyz.anweshadas.in

This sets the SELinux context of the directory xyz.anweshadas.in.

Now we need to make the .well-known directory accessible under our domain, so that Let’s Encrypt can verify it. The nginx configuration is as follows:

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
                alias /var/www/xyz.anweshadas.in/.well-known;
        }

        location / {
                  # proxy commands go here as in your port 80 configuration

                  proxy_pass http://127.0.0.1:9786/;
                  proxy_redirect off;
                  proxy_set_header HOST $http_host;
                  proxy_set_header X-NginX-Proxy true;
                  proxy_set_header X-Real-IP $remote_addr;
         }

}
certbot certonly --dry-run --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

We are performing a test run of the client, obtaining test certificates by placing files in a webroot, but not actually saving them to the hard drive. A dry run is important because the number of times one can get certificates for a particular domain is limited (20 times in a week). All the subdomains under a particular domain are counted separately. To know more, go to the manual page of Certbot.

certbot certonly --webroot -w /var/www/xyz.anweshadas.in/ -d xyz.anweshadas.in

After running the dry run successfully, we rerun the command without --dry-run to get the actual certificates. In the command we provide the webroot using -w, pointing to the /var/www/xyz.anweshadas.in/ directory, for the particular domain (-d) named xyz.anweshadas.in.
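Let’s Encrypt certificates expire after 90 days, so renewal has to be repeated periodically. A sketch of what that can look like (certbot’s renew subcommand reuses the configuration saved during the run above, and the post-hook reloads nginx so it picks up the new certificate):

```shell
certbot renew --post-hook "systemctl reload nginx"
```

This can be run from a cron job or a systemd timer.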

Let us add some more configuration to nginx, so that we can access the https version of our website.

vim /etc/nginx/conf.d/xyz.anweshadas.in.conf

The configuration looks like:

server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;

    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    # add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

To view https://xyz.anweshadas.in, reload nginx.

systemctl reload nginx

In case of any error, go to the nginx logs.

If everything works fine, then follow the below configuration.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
        }

        rewrite ^ https://$host$request_uri? ;

}
server {
    listen 443 ssl;

    # if you wish, you can use the below line for listen instead
    # which enables HTTP/2
    # requires nginx version >= 1.9.5
    # listen 443 ssl http2;

    server_name xyz.anweshadas.in;

    ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

    # Turn on OCSP stapling as recommended at
    # https://community.letsencrypt.org/t/integration-guide/13123
    # requires nginx version >= 1.3.7
    ssl_stapling on;
    ssl_stapling_verify on;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;


    # Uncomment this line only after testing in browsers,
    # as it commits you to continuing to serve your site over HTTPS
    # in future
    #add_header Strict-Transport-Security "max-age=31536000";


    # maintain the .well-known directory alias for renewals
    location /.well-known {

        alias /var/www/xyz.anweshadas.in/.well-known;
    }

    location / {
    # proxy commands go here as in your port 80 configuration

    proxy_pass http://127.0.0.1:9786/;
    proxy_redirect off;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header X-Real-IP $remote_addr;
    }
}

The final nginx configuration [i.e., /etc/nginx/conf.d/xyz.anweshadas.in.conf] looks like the following: it has the rewrite rule forwarding all http requests to https, and the “Strict-Transport-Security” header uncommented.

server {
        listen 80;

        server_name xyz.anweshadas.in;

        location /.well-known {
            alias /var/www/xyz.anweshadas.in/.well-known;
         }

        rewrite ^ https://$host$request_uri? ;

}

server {
        listen 443 ssl;

        # if you wish, you can use the below line for listen instead
        # which enables HTTP/2
        # requires nginx version >= 1.9.5
        # listen 443 ssl http2;

        server_name xyz.anweshadas.in;

        ssl_certificate /etc/letsencrypt/live/xyz.anweshadas.in/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/xyz.anweshadas.in/privkey.pem;

        # Turn on OCSP stapling as recommended at
        # https://community.letsencrypt.org/t/integration-guide/13123
        # requires nginx version >= 1.3.7
        ssl_stapling on;
        ssl_stapling_verify on;

        # modern configuration. tweak to your needs.
        ssl_protocols TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
        ssl_prefer_server_ciphers on;


        # Uncomment this line only after testing in browsers,
        # as it commits you to continuing to serve your site over HTTPS
        # in future
        add_header Strict-Transport-Security "max-age=31536000";


        # maintain the .well-known directory alias for renewals
        location /.well-known {

            alias /var/www/xyz.anweshadas.in/.well-known;
    }

        location / {
        # proxy commands go here as in your port 80 configuration

        proxy_pass http://127.0.0.1:9786/;
        proxy_redirect off;
        proxy_set_header HOST $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header X-Real-IP $remote_addr;
        }
}

So, hopefully the website now shows the desired content at the correct URL.

For this particular work, I am highly indebted to the Linux for You and Me book, which introduced me to the Linux command line and made me comfortable with it.

by Anwesha Das at May 25, 2018 05:04 PM

May 11, 2018

Anwesha Das

Is PyCon 2018 your first PyCon?

Is PyCon 2018 your first PyCon? Then you must have had a sleepless night. You must be thinking, “I will be lost in a gathering of 3500 people.” There must be a lot of mixed emotions and anticipation. Are you the only one who is thinking this way? Do not worry, it is the same with everyone. How can I assure you of that? I had my first PyCon US in 2017, and I, like you and everyone else, went through the same feelings.

Registration:

registration

Once you enter the area, the first thing you have to do is register yourself. The people at the registration desk are really helpful, so do not hesitate to ask your heart out. If there is a problem, the ever-helpful Jackie will be there to guide you. (If you meet her, please say “hi” to her for me :) ). And if you are volunteering, please give first-timers a special welcome; it really makes them feel at home.

The registration is done and you have the schedule now. Mark the talks you want to attend, and their respective halls too. You might want to set an alarm for them, as you may miss them while busy in the hallway tracks (trust me, I have missed a few!).

So now, what to do? What are the interesting things to do at PyCon?

Hallway tracks

hallway

Hallway tracks are the best place to find friends. For many people this is the core of the conference; many prefer the hallway tracks to the actual talks :). People gather in the hallway and discuss not only Python or programming but culture, politics, business, food, and several unconnected topics. Choose a conversation you are comfortable with and join in. You might get your next project idea there. The same rule applies at lunch time: do not be shy to talk to the person next to you. You might find the person you wanted to meet. People are welcoming here. Ask them if you can join; they will generally love the idea. If you are a regular at PyCon, please include a new PyCon attendee in your group :)

Booth visit

The sponsors are the people who make the conference run, so visit them. You might find the new, interesting gig you are looking for. And yes, do not forget to collect the cool swag.

booth

5k Fun Run/Walk

If you love to run, you may like to join the 5K Fun Run/Walk. Ashley is there at the 5K Fun Run/Walk booth (turn to the right of the registration booth) to help. Please pick up your bib, shirt, and information on getting to the park!

Board game night

Inquire about the board game night, if you are interested.

PyLadies Lunch

It is the lunch by and for the PyLadies, a gathering of women who love to code in Python. If you consider yourself a PyLady, do attend it; talk about your local PyLadies chapter, your hurdles, and your successes. You never know, one of your personal stories might inspire another PyLady to grow and face her own struggles. You will find like-minded people there. And never miss giving a shout-out to the PyLady who has in any way inspired you; you may always name several instead of one. If you are there, please raise a toast on my behalf for Naomi, Lorena, Ewa, Carol, Betsy, Katie, Lynn, Jackie, and yourself too :). So register now.

No photo please

PyCon gives you the space and the right to be anonymous, not to be photographed. If you do not want to be photographed, please step out of the frame and convey your wish. You can also ask the person to delete a photo that mistakenly has you in it.

The pronoun you prefer

While registering, pick up 'The pronoun I prefer' badge.

PyLadies Auction

Saturday night is the PyLadies auction. Be a part of this fun fair (with a good cause). Read about it here.

Quiet room

If you want to work, want to be left alone, or need some space in the gathering of 3500 people, find the quiet room.

First time speaker?

Are you speaking for the first time at PyCon? Nervous? Do not want to leave any room for mistakes? Want to rehearse your talk? There is a speakers' room to practice in. Another easy way to rehearse is to grab someone (whose opinion you value) and give the talk to her. This will give you a proper third-eye view and a tentative sense of the audience's response. Last year Naomi helped me do this; she sat with me for hours and corrected me. I never had a chance to say, “Thank you, Naomi”.

Poster Presentation and Open Spaces

Do not forget to visit the Poster Presentations and Open Spaces to know what is happening in the current Python world.

Code of Conduct

PyCon, as [Peter] says, is the “holy gathering of the Python tribe”; we are all part of a diverse community, so please respect that. Follow the Code of Conduct. This is the rule book, and you have to abide by it at all times. If you have any issue, please do not hesitate to contact the staff. Rest assured that they will take the required measures. Lastly, do not hold yourself back from saying “sorry” and “thank you”. These two magical words can solve many problems.

One thing is for sure: your life after these 3 days will be completely different. You will come back wealthy with knowledge, lovely memories, and friends.

friendsatpycon2

PS: A huge shout-out to the PyCon staff for working relentlessly over the year to put up this great PyCon for you. And thank you for coming and attending PyCon and making it the great event it is.

by Anwesha Das at May 11, 2018 04:17 PM

April 04, 2018

Saptak Sengupta

What's the preferred unit in CSS for responsive design?

Obviously, it's %. What a stupid question! Well, you can sometimes use em maybe. Or maybe vh/vw. Well, anything except px for sure.

Sadly, the answer isn't as simple as that. In fact, there is no single correct answer; it depends a lot on the design decisions taken rather than on a fixed rule for making websites responsive. The funny thing is, many times you might actually want to use px instead of %, because the latter is going to mess things up. In this blog, I will try to describe some scenarios where each of the units works better in a responsive environment.

PS: All of this is mostly my personal opinion and working experience. Not to be mistaken as a rule book.


Where to use %?

Basically, anywhere you have some kind of layout or grid involved. I use % whenever I feel the screen needs to be divided into proportions rather than fixed sizes. Let's say I have a side navigation area and the remaining body of a website. I would use % in this case to measure the margins and the distribution of area, since I definitely want them to vary with the screen. The same goes for a grid of rows and columns: I want the widths in the grid to be in percentage so that the number of columns changes with the width of the screen.

Another use of percentage is while determining the margin of an element. You might want to have much more margin on a wider screen than in a smaller screen. Hence, often the margin-left and margin-right are advisable to be in percentage.

Also, for fonts and typography elements, you should prefer relative units like % over px, since fixed pixel font sizes can override users' browser font-size preferences, which goes against W3C accessibility recommendations.

PS: A lot many times, it is preferable to use flexbox and grid layout instead of trying to layout things via margins and floats.
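A minimal sketch of the percentage-based layout described above (the class names are hypothetical, and floats are used only for illustration; flexbox or grid would be preferable in practice):

```css
/* Proportional layout: sidebar and main body divide the screen width */
.side-nav {
  width: 25%;          /* always a quarter of the screen, whatever its size */
  float: left;
}
.main-body {
  width: 70%;
  float: left;
  margin-left: 5%;     /* horizontal margins in % scale with the screen */
}
```

As the viewport widens or narrows, both columns and the gap between them scale in proportion, which is exactly what fixed px widths would not do.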

Where to use px?

To be honest, yes, it is better to avoid px when you want things to stay fluid and responsive. But having said that, there are some cases where, even in a fluid design, you want things to have a fixed value. One of the most common examples is the top navigation bar's height. You don't want the navigation bar's height to change with the screen size. You might want the width to change, or show a hamburger button instead of the list of hyperlinks, but you most often want to keep the height of the navigation bar fixed. You can do this either by setting the height property in CSS or with padding, but the unit should mostly be px.

Another use would be the margin of an element, but this time the top and bottom margins instead of left and right. When your website is divided into sections, you usually want the margin between the different sections to be a fixed value that does not change with the screen width.
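The two px use cases above, sketched with hypothetical class names:

```css
/* Fixed values that should not scale with the screen width */
.top-nav {
  height: 60px;        /* nav bar height stays the same on every screen */
}
.section {
  margin-top: 40px;    /* vertical spacing between sections is fixed */
  margin-bottom: 40px;
}
```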

Where to use em?

em is mainly to be used when setting font sizes and typography elements; by that I mean wherever text is involved it is often good to use em. The size of em depends on the font-size of the parent element. So if the parent element has 100% font-size (the default browser font size), then 1em = 16px. But em is a compounded measure: the more deeply nested your elements are, the more the computed value of em keeps changing. So it can often be tricky to work with em, but this compounding is also a feature that can sometimes help you get a proportionally scaled font size.

Where to use rem?

The main difference between em and rem is that rem depends on the font-size of the root element of a website (basically "root em"), i.e. the <html> element. Whatever the font-size of the root element, rem is always computed from it, and hence, unlike em, the computed pixel value is uniform throughout the website. So the choice between rem and em depends highly on the use case.
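In contrast to the em example, rem ignores nesting entirely, as in this sketch:

```css
html { font-size: 16px; }   /* the root size that rem is measured against */
h1   { font-size: 2rem; }   /* always 32px, however deeply nested */
p    { font-size: 1rem; }   /* always 16px, even inside nested containers */
```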

Where to use vh or vw?

vh and vw stand for viewport height and viewport width. Their advantage over % is that they are relative to the viewport itself rather than to the parent element, so you can size things based on both the height and the width of the screen. 1vh means 1% of the viewport height, so if you want something to take the entire height of the screen, you would use 100vh. This applies both to setting widths and heights and to setting font sizes: you can make the font size of a heading scale with the height or width of the viewport instead of the font size of its parent element. Similarly, you can set line heights based on the viewport height instead of the parent element's measures.
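A short sketch of viewport-relative sizing (class names are hypothetical):

```css
/* Sizes tied to the viewport, not to any parent element */
.hero {
  height: 100vh;       /* fills the full height of the viewport */
}
.hero-title {
  font-size: 5vw;      /* text scales with the viewport width */
  line-height: 8vh;    /* line height tied to the viewport height */
}
```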

Where to use media queries?

Even after using all of the above strategically, you will almost certainly need media queries. Media queries are conditional blocks of CSS based on the actual screen width of the device, used much like conditionals in other programming languages. For example: if the screen width is less than 720px, make this element 10% wide, else make it 25% wide. Why do we need this? The main reason is the aspect ratio of screens. On a desktop, the width of the screen is much greater than the height, so a 25% wide element might not occupy much of the screen. On a mobile screen, however, where the width is much smaller than the height, 25% might occupy more of the layout than you want it to. Hence media queries are needed so that, on the transition from wide screens to narrow screens, even the percentage widths can be changed.
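The 720px example in the paragraph above can be written directly as a media query (the class name is hypothetical):

```css
/* Wide screens: the sidebar takes a quarter of the width */
.side-nav { width: 25%; }

/* Narrow screens: give it a smaller share of the (smaller) width */
@media (max-width: 720px) {
  .side-nav { width: 10%; }
}
```

Even though both rules use %, the media query lets the proportion itself change at the breakpoint.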


As far as I can tell, there are use cases and scenarios where each of them is useful. Yes, px is the least used unit if you are concerned about responsiveness, but there will almost always be some elements on your website to which you want to give a fixed width or height. All the other measures change, but the way they change differs from unit to unit, and so the choice depends a lot on the designer and the frontend. Also, newer CSS specifications have added a lot of features, such as flexbox and grid layout, which make handling layout a lot easier than before.

by SaptakS (noreply@blogger.com) at April 04, 2018 11:47 AM

April 02, 2018

Saptak Sengupta

FOSSASIA 2018: Conference Report


FOSSASIA 2018 was my 2nd FOSSASIA conference, and this year it was at a new venue (Lifelong Learning Institute) and ran for a longer time. As always, there were a lot of speakers and a lot of really exciting sessions. Last year I was a little confused, so this year I planned earlier which talks to attend and what to do.

22nd March (1st Day)

The opening ceremony was kicked off by Harish Pillay and Damini Satya, both of whom did an incredible job of hosting the entire day. The opening ceremony was followed by the keynote talks and a panel discussion, which lent great insight into how open source, AI, blockchain, and other modern technologies work hand in hand with each other. Harish Pillay also shared his view that AI won't take over human beings; rather, human beings will evolve into something that is a combination of human and AI, and hopefully have a good future. I do agree with him to some extent.

Hong addressed the audience, stating the primary focus of FOSSASIA for the next few years and how it involves helping more developers get involved in open source and build new cool things. The Codeheat winners were awarded next for their wonderful contributions to different FOSSASIA projects. The mentors of the projects were also honored with medals, which was something I wasn't expecting. Then it was time for the track overviews, to help people understand what the different tracks were all about; we explained what the tracks were and why the audience should be interested. With that, it was time for the most important track: the Hallway Track. So people talked and networked in the exhibition area for the rest of the day.

23rd March (2nd Day)

I was the moderator of the Google Training Day and also of the cloud track in one of the rooms, which meant getting up early and reaching there on time. Fortunately, I made it (I still don't know how). Being the moderator, I was there almost the entire day, which meant a lot of Google Cloud learning for me. The talks ranged from using BigQuery to handle queries on big data to using Cloud ML for machine learning. The Google Training Day talks were followed by a talk on serverless computing and a tutorial on Kubernetes. After that, it was again time to hang out in the exhibition area and talk with people.

24th March (3rd Day)

Today was the day of my talk. I was pretty worried the night before about whether I would make it to my own talk, since it was at 9.30 in the morning. I did make it. What was more surprising was that there were actually more people than I expected at 9.30 in the morning, which was great. Apart from a few technical glitches in the middle of my talk, everything went pretty smoothly. I talked about how we at Open Event decoupled the architecture to have a separate backend and frontend, and how that really helps development and maintenance. I also gave a brief overview of the architectures involved and the code and file structures.

After finishing my talk, I attended the SELinux talk by Jason Zaman. SELinux is a confusing and mystifying topic for most people, and there was no way I was missing this talk. He gave a hands-on session on setting up SELinux policy and using audit logs. Next was the all-women panel about open source and tech. After this came the customary group photo, where the number of participants made things a little too difficult for the photographer.

The rest of the day was pretty involved: I mentored at the UNESCO hackathon, helped with video recording, and so on.

25th March (4th Day)

The final day of the event. I was really interested in attending the talk about Open Source Design by Victoria, and hence reached the venue by 10 am. It was a great insight into how Open Source Design is evolving and bringing more and more designers into open source, which is really great. The last session I was eagerly waiting for was the GPG/PGP key-signing event. I had a lot of fun helping people create their first GPG/PGP keys and signing them, and met and interacted with some really awesome people there.



At last, it was time for the conference closing ceremony. But it wasn't over yet: we all met at the hackerspace, where I had some great discussions with people about the different projects I work on, and it was really great to hear their views.

All in all, it was really great meeting old friends, making new friends, and meeting people whom I had known only by their nicks. More than the talks themselves, what makes a great conference is the people in it and the chance to meet them once a year. At least that's how I see it. And FOSSASIA 2018 served that purpose wonderfully.

by SaptakS (noreply@blogger.com) at April 02, 2018 05:14 AM