Planet dgplug

September 17, 2019

Kushal Das

Permanent Record: the life of Edward Snowden

book cover

The personal life and thinking of the ordinary person who did an extraordinary thing.

A fantastic personal narrative of his life and thinking process. The book does not get into technical details, but it makes sure that people can relate to the different events it describes. It tells the story of a person who was born into the system, grew up to become part of it, and then learned to question that same system.

I bought the book at midnight on Kindle (I also ordered physical copies), slept for 3 hours in between, and finished it in the morning. Anyone born in the 80s will find many similarities, be it the Commodore 64 as the first computer we saw or BASIC as the first programming language we tried. The lucky ones also got Internet access and learned to roam around on their own, building their adventures over busy telephone lines (which often made the rest of the family unhappy).

If you are someone from the technology community, I don't think you will find Ed's life much different from yours. The scenario and key players are different, but you will be able to recognize the same progression in life that many tech workers like us have gone through.

Maybe you are reading the book just to learn what happened, or maybe you want to know why. Either way, I hope it will help you think about the decisions you make in your life and how they affect the rest of the world, be it a group picture posted on Facebook or the next new tool written for the intelligence community.

Go ahead and read the book, and when you are finished, pass it on to your friends, or buy them new copies. If you have some free time, you may also consider running a Tor relay or a bridge; this simple step will help many people around the world.

On a side note, the book mentions the SecureDrop project at the very end, and today also marks the release of SecureDrop 1.0.0 (the same day as the book's release).

September 17, 2019 04:44 AM

September 16, 2019

Jason Braganza (Work)

Peter Kaufman on The Multidisciplinary Approach to Thinking

Peter Kaufman, editor of Poor Charlie’s Almanack, on why it is important to be a multidisciplinary thinker.

Because as the Japanese proverb says, ‘The frog in the well, knows nothing of the mighty ocean.’
You may know everything there is to know about your specialty, your silo, your “well”, but how are you going to make any good decisions in life …
the complex systems of life, the dynamic system of life …
if all you know, is one well?

He then talks about a sneaky shortcut he used to do it.

So I tried to learn what Munger calls, ‘the big ideas’ from all the different disciplines.
Right up front I want to tell you what my trick was, because if you try to do it the way he did it, you don’t have enough time in your life to do it. It’s impossible. Because the fields are too big and the books are too thick. So my trick to learn the big ideas of science, biology, etc., was I found this science magazine called Discover Magazine. […]
I found that this magazine every month had a really good interview with somebody from some aspect of science. Every month. And it was six or seven pages long. It was all in layperson’s terms. The person who was trying to get their ideas across would do so using good stories, clear language, and they would never fail to get all their big ideas into the interview. […]
So I discovered that on the Internet there were 12 years of Discover Magazine articles available in the archives. So I printed out 12 years times 12 months of these interviews. I had 144 of these interviews. And I put them in these big three ring binders. Filled up three big binders.
And for the next six months I went to the coffee shop for an hour or two every morning and I read these. And I read them index fund style, which means I read them all. I didn’t pick and choose. This is the universe and I’m going to own the whole universe. I read every single one.
Now I will tell you that out of 144 articles, if I’d have been selecting my reading material, I probably would have read about 14 of them. And the other 130? I would never in a million years read six pages on nanoparticles.
Guess what I had at the end of six months? I had inside my head every single big idea from every single domain of science and biology. It only took me 6 months. And it wasn’t that hard because it was written in layperson’s terms.
And really, what did I really get? Just like an index fund, I captured all the parabolic ideas that no one else has. And why doesn’t anybody else have these ideas? Because who in the world would read an interview on nanoparticles? And yet that’s where I got my best ideas. I would read some arcane subject and, oh my god, I saw, ‘That’s exactly how this works over here in biology.’ or ‘That’s exactly how this works over here in human nature.’ You have to know all these big ideas.

And then, in an extraordinary act of generosity, he spends the rest of the talk, the next 40 or so minutes, summing up all he has learnt.

You should go read the talk at Latticework Investing.

Even better, you should go listen. Kaufman is a really engaging speaker.

I hope you listen to this every once in a while, like I do.

Shane Parrish also merges Peter’s ideas with the Durants for an amazing post on the lessons of history.

P.S. Subscribe to my mailing list!
P.P.S. Feed my insatiable reading habit.


by Mario Jason Braganza at September 16, 2019 12:15 AM

September 14, 2019

Kuntal Majumder

Duck typing

Definition: Duck typing in computer programming is an application of the duck test—"If it walks like a duck and it quacks like a duck, then it must be a duck"—to determine if an object can be used for a particular purpose.
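
For instance, here is a minimal Python sketch of the idea (the class and function names below are made up purely for illustration): the function accepts any object that provides a quack() method, regardless of its actual type.

class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I am quacking like a duck!"

def make_it_quack(thing):
    # No isinstance() check: if the object can quack, that is good enough.
    print(thing.quack())

make_it_quack(Duck())    # Quack!
make_it_quack(Person())  # I am quacking like a duck!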

September 14, 2019 09:50 AM

September 11, 2019

Priyanka Saggu

Some pending logs!

September 11, 2019

It’s been a very long time since I last wrote here.

There is no big reason, but mainly:

  1. Apparently, I was not able to finish in time some of the tasks that I used to write about.
  2. I was not well for a long time, which could be another reason.
  3. Besides, life happened in many ways, which ultimately left me working on some other things first, because they seemed more *important* at the time.

And, yes, there is no denying that I was procrastinating too, because writing seems really hard most of the time.

Still, I worked on many things throughout this time, and I’ll try to write them up as short, quick logs below.



This one question always came up, many times, the students managed to destroy their systems by doing random things. rm -rf is always one of the various commands in this regard.

Kushal Das
  • While I was doing the above task, at one point I ruined my local system’s mail server configs and ended up doing exactly what kushal writes about in one of his recent posts (quoted above). I was using rm -rf to clean up some left-over dependencies of some mail packages, but that eventually crashed the machine. And that was not the end of the mess. I made another extremely big mistake in the meantime: I was trying to back up the crashed system to an external hard disk using dd, but because I had never used dd before, I again did something wrong, and this time I ended up losing ~500 GB of backed-up data. This is “the biggest mistake” and “the biggest lesson” I have learnt so far. 😔😭 (Now I know why one should have multiple backups.) And as there was absolutely no way of getting that much data back, the last thing I did was format the hard disk into 2 partitions, one with an ext4 file system for Linux backups and the other as NTFS for everything else.

Thank you so much jasonbraganza for all the help and extremely useful suggestions during the time. 🙂


  • Okay, after all the hustle and bustle above, I got something really nice: I received the “Raspberry Pi 4, 4GB, Complete Kit” from kushal.

Thank you very much kushal for the RPi, and another huge thanks for all the guidance and support that made me what I am today. 🙂


  • Around the same time, I attended a dgplug guest session by utkarsh2102. This session gave me a really good beginner’s insight into how things actually work in the Debian Project. I owe a big thanks to utkarsh2102 as well, for he so nicely volunteered to guide me from there onwards in actually getting started with the Debian project. I have started with the DPMT and have packaged 4 Python modules so far. And now I am looking forward to contributing to the Debian Ruby Team as well.

  • With the start of September, I spent some time solving basic Python problems from kushal’s lymworkbook. Those issues were related to some really simple sysadmin work, but for me, working through them and wrapping them in Python was a whole lot of learning. I hope I will continue to solve more problems/issues from the lab.

  • And lastly (and currently), I am back to reading and implementing concepts from the Ops School curriculum.

Voila, I have finally finished compiling the logs from the last 20 or so days of work and other stuff (and, in doing so, I am also finishing my long-pending task of writing this post).

I will definitely try to be more consistent with my writing from now onwards. 🙂

That’s all for now. o/

by priyankasaggu119 at September 11, 2019 05:28 PM

September 10, 2019

Kushal Das

Exciting few weeks in the SecureDrop land

Eric Trump tweet

Last week there was an interesting tweet from Eric Trump, son of US President Donald Trump, in which he points out how Mr. David Fahrenthold, a journalist from The Washington Post, did some old-school journalism and made sure that every Trump Organization employee knows how to securely leak information or talk to a journalist via SecureDrop.

I want to thank him for this excellent advertisement for our work. Many people on Twitter cheered him for this tweet.

julian and matt's tweet Parker's tweet Harlo's tweet

If you don’t know what SecureDrop is, it is an open-source whistleblower submission system that media organizations and NGOs can install to securely accept documents from anonymous sources. It was originally created by the late Aaron Swartz and is now managed by Freedom of the Press Foundation. It is mostly written in Python and uses a lot of Ansible. Jennifer Helsby, the lead developer of SecureDrop, and I took part in this week’s Python podcast along with our host Tobias. You can listen to it to learn about many upcoming features and plans.

If you are interested in contributing to the SecureDrop project, come over to our Gitter channel and say hello.

defcon

Last month, during DEF CON 27, there was a panel about DEF CON helping hackers anonymously submit bugs to the government. Interestingly, the major suggestion in that panel was to use SecureDrop (hosted by DEF CON) so that researchers can safely submit vulnerabilities to the US government. Watch the full panel discussion to learn more.

September 10, 2019 04:52 AM

Jason Braganza (Personal)

8


Ubi enim est thesaurus tuus, ibi est et cor tuum.

For where thy treasure is, there is thy heart also.

You’ve stood by me through thick and thin.
We’ve been through houses and hospitals and travails and travels around the world.
We finish each other’s thoughts and sentences, (much to Poo’s chagrin,)
I don’t know what I’d do without you.

To quote a silly old country song,

When my life is through,
And the Angels ask me to recall
The thrill of it all, then I will tell them
I remember you …

I love you,
I do,
more than I can tell you,
more than I ever did, eight years ago.


by Mario Jason Braganza at September 10, 2019 12:15 AM

September 09, 2019

Jason Braganza (Personal)

Phenomenal Woman


Now you understand
Just why my head’s not bowed.
I don’t shout or jump about
Or have to talk real loud.
When you see me passing,
It ought to make you proud.
I say,
It’s in the click of my heels,
The bend of my hair,
the palm of my hand,
The need for my care.
’Cause I’m a woman
Phenomenally.
Phenomenal woman,
That’s me.

Maya Angelou

P.S. Subscribe to my mailing list!
P.P.S. Feed my insatiable reading habit.


by Mario Jason Braganza at September 09, 2019 12:15 AM

September 08, 2019

Kuntal Majumder

Finding the Edge

Three months of fighting with Boost and Qt, having a proper plan and multiple people to get help from, and still being unable to hit the target in time. That will be Software Engineering 101 for me.

September 08, 2019 11:42 AM

September 02, 2019

Jason Braganza (Work)

Notes from Jocelyn K Glei’s Podcast Episode on Creativity & Efficiency

My dad was a carpenter.
Well everyone called him that, but I know him for what he truly was.
A craftsman.
Be it his work with wood, or the little works of art and craft he made for us, or his drawings in my book; everything he did was slow, and measured, and full of deliberation and intention.

Which is why this episode struck such a chord with me.
Jocelyn articulates beautifully, exactly what my father did.
I still remember his slight rankle, followed by this expression of sorrow, whenever I would rush him, tell him this much was good enough.
Thank God, he never listened to me.
He may not be here now, but everything he built, makes it like he is.

It’s a short delightful episode. Go listen.
Definitely worth your time and attention.
Everything below the break are my paraphrased notes.


Creativity and Efficiency, have absolutely nothing to do with each other.
The creative process actively resists efficiency.

Creativity is messy and organic and full of (as it looks from the outside) friction, which is a little bit frustrating, because everything else in our lives keeps getting faster, easier, smoother, more efficient, more frictionless.

We have become more accustomed to a kind of effortless convenience.
Ask and ye shall receive.

So there’s a really interesting tension here.
Between the pace of technology and the pace of creativity.

Work is what we do by the hour. It begins and ends at a specific time and, if possible, is done for money. Welding car bodies on an assembly line is work; washing dishes, computing taxes, walking the rounds in a psychiatric ward, picking asparagus–these are work.

Labor, on the other hand, sets its own pace. We may get paid for it, but it’s harder to quantify… Writing a poem, raising a child, developing a new calculus, resolving a neurosis, invention in all forms — these are labors.

Work is an intended activity that is accomplished through the will. A labor can be intended but only to the extent of doing the groundwork, or of not doing things that would clearly prevent the labor. Beyond that, labor has its own schedule.

[…]

There is no technology, no time-saving device that can alter the rhythms of creative labor. When the worth of labor is expressed in terms of exchange value, therefore, creativity is automatically devalued every time there is an advance in the technology of work.

The Gift, Lewis Hyde

As technology makes everything more efficient, we tend to think that creativity should also become more efficient, that there must be a way to do creative work, that’s better, faster, more scalable … but is there?

What’s more important?
Doing all the things?
Or enjoying all the things that you’re doing?

Creativity resists efficiency.
No one can tell you how much time something should take, because creativity is not measurable on a time clock.
It’s not practical or efficient or objectively quantifiable.
What it is, is deeply personal.

No one knows, how long it takes to make anything.
Which means, no one knows what pace your creative process should unfold at, except for you.
And no one knows, what boundaries you need to set up to protect that process, but you.
And no one knows, how much you should obsess about the details, or how far you should go, and when you should say, “This is enough!”‚ but you.

Remarkable creative projects don’t come from efficiency.
If anything, they come from inefficiency.
From doggedly ignoring all the rules, and saying,

“I am going to devote an ungodly amount of time to this thing, that no one else thinks is important, but that I think is important.”

Great creative work comes from slowing down, when everyone else is rushing around and saying,

“I’m going to take my time and notice this thing, that everyone else is missing and really sit with it, and contemplate it and craft it to create something remarkable.
Actually, something that’s even more remarkable, because no one else would have taken the time.”

So the next time you feel stuck or rushed or judged for your “inefficiencies”, remember that they’re also your strength.
Because greatness comes from working at your own pace.
Remember to take your time.

P.S. Also, remember to subscribe to the mailing list, if you haven’t already! :)


by Mario Jason Braganza at September 02, 2019 12:15 AM

August 31, 2019

Armageddon

The story behind cmw

A few days ago, Kushal Das shared a curl command.

The command was as follows:

$ curl https://wttr.in/

I, obviously, was curious. I ran it, and it was interesting. So it returns the weather, right? Pretty cool, huh!

Read more… (3 min remaining to read)

by Elia El Lazkani at August 31, 2019 04:00 AM

August 29, 2019

Sayan Chowdhury

Why I prefer SSH for Git?


In my last blog, I quoted

I'm an advocate of using SSH authentication and connecting to services like Github, Gitlab, and many others.

On this, I received a bunch of messages over IRC asking why I prefer SSH for Git over HTTPS.


I find the Github documentation quite helpful when it comes to learning the basic operations of Git and Github. So, what does Github have to say about "SSH v/s HTTPS"?

Github earlier used to recommend SSH, but they later changed the recommendation to HTTPS. The reasons for Github's current recommendation could be:

  • Ease of getting started: HTTPS is very easy to start with, as you don't have to set up your SSH keys separately. Once the account is created, you can just go over and start working with repositories. The first issue you hit, though, is that you need to enter your username/password for every operation you perform with git. This can be overcome by caching or storing the password using Git's credential storage (a sketch follows just after this list). If you cache it, the password is kept in memory for a limited period, after which it is flushed, so you need to enter your credentials again. I would not advise storing the password, as it is stored as plain text on disk.
  • Easily accessible: HTTPS, in comparison to SSH, is more easily accessible. Why, you may ask? The reason is that a lot of the time SSH ports are blocked behind a firewall, and the only option left for you might be HTTPS. This is a very common scenario I've seen in Indian colleges and a few IT companies.
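
For reference, the caching mentioned in the first point uses Git's built-in credential helper; a minimal sketch (the timeout value here is just an example):

$ git config --global credential.helper cache

# or keep credentials in memory for an hour instead of the default 15 minutes
$ git config --global credential.helper 'cache --timeout=3600'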

Why do I recommend the SSH way?

SSH keys provide Github with a way to trust a computer. For every machine I have, I maintain a separate set of keys. I upload the public keys to Github or whichever Git forge I'm using. I also maintain a separate set of keys per website. So, for example, if I have 2 machines and I use both Github and Pagure, then I end up maintaining 4 keys. This is like a 1-to-1 mapping between website and machine.
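
As a sketch of what that looks like in practice (the key file name and comment below are only illustrative conventions, not anything the forges require):

# a dedicated ed25519 key pair for Github on this machine
$ ssh-keygen -t ed25519 -C "laptop-github" -f ~/.ssh/id_ed25519_github

# the public part then goes into the forge's SSH key settings
$ cat ~/.ssh/id_ed25519_github.pub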

SSH is secure until you end up losing your private key. Even if you do lose your key, you can just log in using your username/password and delete that particular key from Github. I agree that an attacker could do nasty things, but that would be limited to the repositories, and you would still have control of your account to quickly mitigate the problem.

On the other hand, if you end up losing your Github username/password to an attacker, you lose everything.

I also once benefited from using SSH with Github but, IMO, explaining how would also expose a vulnerability, so I'll just keep it a secret :)

Also, if you are on a network that has SSH blocked, you can always tunnel SSH over the HTTPS port.
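
For example, Github also serves SSH on the HTTPS port (443) via the ssh.github.com host, so a small ~/.ssh/config entry is usually all you need; the IdentityFile path below reuses the illustrative key name from earlier:

Host github.com
    HostName ssh.github.com
    Port 443
    User git
    IdentityFile ~/.ssh/id_ed25519_github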

But, above all, do use the 2-factor authentication that Github provides. It adds an extra layer of security to your account.

If you have other thoughts on the topic, do let me know over twitter @yudocaa, or drop me an email.


Photo by Christian Wiediger on Unsplash

by Sayan Chowdhury at August 29, 2019 11:55 AM

August 26, 2019

Saptak Sengupta

Configuring Jest with React and Babel

Jest is a really good frontend testing framework and works great with React and Babel out of the box, along with Enzyme for component testing. But code with React and Babel can often be filled with nasty relative imports. I wrote in a previous blog post about how to make cleaner imports using some webpack tweaks.

But a problem appears when we try to write Jest and Enzyme tests with them, because Babel can no longer understand and parse the imports. And without Babel parsing them and converting them to ES5, Jest cannot test the components. So we actually need a mix of Babel configuration and Jest configuration.

Note: This post assumes you already have the jest, babel-jest and @babel/plugin-transform-modules-commonjs packages installed using your favorite JavaScript package manager.

Basically, the workaround is this: first we resolve the cleaner imports into absolute paths using the Jest configuration, and then we use the Babel configuration to parse the rest of the code (without the module imports) to ES5.

The configuration files look something like these:

babel.config.js

module.exports = api => {
  const isTest = api.env('test');
  if (isTest) {
    return {
      presets: [
        [
          '@babel/preset-env',
          {
            modules: false,
          },
        ],
        '@babel/preset-react',
      ],
      plugins: [
        "@babel/plugin-transform-modules-commonjs",
      ],
    }
  } else {
    return {
      presets: [
        [
          '@babel/preset-env',
          {
            modules: false,
          },
        ],
        '@babel/preset-react',
      ],
    }
  }
};


jest.config.js

module.exports = {
  moduleNameMapper: {
    '^~/(.*)$': '<rootDir>/path/to/jsRoot/$1'
  }
}

So let's go through the code a little.

In babel.config.js, we check whether the code is currently running in the test environment. This is helpful because:
  1. Jest sets the environment to "test" when running a test, so it is easily identifiable.
  2. It ensures that the test configuration doesn't mess with the non-test configuration in our Webpack config (or any other configuration you are using).
So in this case, I am returning the exact same Babel configuration that I need in my Webpack config in the non-test environment.

In the test configuration for Babel, we are using the plugin "@babel/plugin-transform-modules-commonjs". This is needed to parse all the non-component imports like React, etc., along with parsing the components from ES6 to ES5 after Jest does the path resolution. So it helps convert the module imports from ES6 to ES5.

Now, let's see jest.config.js. The Jest configuration allows us to do something called moduleNameMapper. This is a very useful configuration for many different use cases. It basically allows us to convert the module names or paths we use for module imports into something that Jest understands (or, in our case, something that the Babel plugin can parse).

So, the left-hand part of the attribute contains a regular expression that matches the pattern we use for imports. Since our imports look something like '~/path/from/jsRoot/Component', the regular expression to capture all such imports is '^~/(.*)$'. Now, to convert them to absolute paths, we prepend '<rootDir>/path/to/jsRoot/' to the captured part, so an import of '~/path/from/jsRoot/Component' resolves to '<rootDir>/path/to/jsRoot/Component'.

And, voila! That should allow Jest to properly parse, convert to ES5 and then test.

The best part? We can use the cleaner imports even in the .test.js files and this configuration will work perfectly with that too.

by SaptakS (noreply@blogger.com) at August 26, 2019 06:16 AM

Making cleaner imports with Webpack and Babel

You can bring in modules from a different JavaScript file using require-based JavaScript code or normal Babel-parseable imports. But the code with these imports often becomes a little ugly because of relative imports like:

import Component from '../../path/to/Component'

But a better, cleaner way of writing ES6 imports is

import Component from '~/path/from/jsRoot/Component'

This largely avoids the bad relative import paths, which depend on where the component files are. Now, this is not parseable by Babel itself, but you can resolve it with webpack using its resolve attribute. So your webpack config should have these two segments of code:

resolve: {
    alias: {
        '~': __dirname + '/path/to/jsRoot',
        modernizr$: path.resolve(__dirname, '.modernizrrc')
    },
    extensions: ['.js', '.jsx'],
    modules: ['node_modules']
},

and

module: {
    rules: [
        {
            test: /\.jsx?$/,
            use: [
                {
                    loader: 'babel-loader',
                    query: {
                        presets: [
                            '@babel/preset-react',
                            ['@babel/preset-env', { modules: false }]
                        ],
                    },
                }
            ],
        },
    ],
},

The { modules: false } setting ensures that babel-preset-env doesn't handle the parsing of the module imports. You can check this comment in a webpack issue to learn more about it.

by SaptakS (noreply@blogger.com) at August 26, 2019 05:56 AM

August 24, 2019

Abhilash Raj

Don't write more Dockerfiles

With containers come container images. Yes, you don't necessarily have to use them, but it is nicer to isolate the filesystem too, so that one can fix the packaging problem of an application and laugh at the developers of dynamically linked libraries.

The most popular and the only sane way to create container images today is the Dockerfile. There are actually tons of tools that can build container images today, but most of them use the Dockerfile format as input. There are other options like buildah, which has a custom format. Then there is also packer, which allows you to write what they call, wait for it, a packerfile. The great thing about the packerfile is that it is much more declarative, and I love that, although packer was built to create VM images, so it lumps all the changes you want into a single container layer. That is not great, since layers help not just with build speeds but also with deployment speeds, because the older layers are cached.

Anyway, this blog post is mostly going to try to convince you that Dockerfiles are not the best possible way to create container images. I am okay with the format being an input for whatever tool builds the images, but it shouldn't be something that I have to write. It is essentially a shell script, and that makes it terribly difficult to do so many things.

So first, let's see what the difference between a declarative and an imperative language is. These terms are used more often in the context of programming languages. I am not going to go too much into the details of the difference between them; instead, let me explain with an example of what a typical build script for a container image looks like today:

FROM ubuntu:18.10

ENV LANG=en_US DEBIAN_FRONTEND=noninteractive

RUN apt update && \
    apt install --yes python3-mysql mariadb-server && \
    apt clean

RUN useradd -r mailman

USER mailman

SHELL ["/bin/bash", "-c"]

CMD ["mailman", "start"]

This is an imperative script: you write exactly how things should work and exactly what the image should look like. You said that you want the apt-related changes in one single layer, and the useradd command in a second layer. Then you said which environment variables you want. You write the precise commands and flags for the apt install command, like --yes for non-interactive mode, and the DEBIAN_FRONTEND environment variable to signal to apt that it is not running in an interactive session.

This is great; however, a lot of duplication is going to happen across the tens or hundreds of container images that one might maintain, be it a company or an individual. You can do everything possible today with bash scripts and Unix tools, but programming languages were invented so that you could do that in a more maintainable way and possibly avoid duplication.

What information do you need to create a container image? Well, most of the instructions in a Dockerfile are pretty declarative, like FROM, ENV, USER, SHELL, CMD, EXPOSE and several others that you can find in the official Dockerfile reference. The only non-declarative one, IMO, is RUN.

RUN basically allows arbitrary commands, which users execute to set up an image. It is great to have power like this, but not many people need it, not for all the usual tasks like installing packages, chowning files, adding users, setting up configuration (using postconf -e, for example, to set up postfix), adding pre-initialization steps in the ENTRYPOINT, etc.

We don't really need to reinvent the entire wheel to solve all the above problems. There are tons of other tools that have solved this problem; they are called configuration management tools: Chef, Puppet, Ansible. I understand that their model is structured around managing runtime systems and enforcing policies at runtime (Ansible is slightly different; it can be used for one-time configuration as well as runtime enforcement, and maybe the others can too. I am personally not very familiar with Chef and Puppet).

But there does seem to be a need for a more special-case, config-management-like tool which can emit a Dockerfile as output from a configuration file, with as few shell scripts and commands as possible. How much control you really want to give your users is a design decision; not all programmers want that flexibility and agility in the system. Sometimes people are happy to have some decisions made on their behalf, following best practices, so that they can focus on their work. Unless your work is creating container images, you shouldn't have to worry about layers, base images, the efficiency of building and downloading images, etc.

[my-image]
base = "ubuntu:18.10"
install = [ "python3-dev", "mariadb"]
user = "mailman"
command = "mailman start"

So, this is what I expect a simple declarative format for a container image declaration to look like. Note that it is not something novel that I have come up with by any means; it is more or less exactly like a packerfile. But what really is the benefit of using something like this? (A small sketch of what a tool consuming this format might do follows the list below.)

  • First, just by looking at base I can easily infer a couple of things: it is an apt-based system, so I know how to install the packages from the install section, and I can clean up the apt caches (we could do the same for dnf- or pacman-based systems too).
  • Looking at the install section, I know what the final derived license of my container image can be, and I can optimize the container image as much as I like by deleting every single database file for apt or dnf.
  • The user directive lets me know what to run the final command as. I don't necessarily need to use the USER directive of the Dockerfile, since there may be setup scripts in the ENTRYPOINT that need root; you can just use something like su-exec and render an entrypoint script yourself.
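
To make the idea concrete, here is a minimal Python sketch (my own illustration, not an existing tool; it assumes Python 3.11+ for the standard-library tomllib module) of how a config in the format above could be turned into a Dockerfile. The apt handling is hard-coded purely for demonstration.

import tomllib  # standard library in Python 3.11+; older versions can use the tomli package

config_text = """
[my-image]
base = "ubuntu:18.10"
install = [ "python3-dev", "mariadb" ]
user = "mailman"
command = "mailman start"
"""

def render_dockerfile(image):
    lines = [f'FROM {image["base"]}']
    packages = image.get("install")
    if packages:
        # Collapse package installation into a single layer and clean the cache;
        # the base image tells us this is an apt-based system.
        lines.append(
            "RUN apt update && "
            f"apt install --yes {' '.join(packages)} && "
            "rm -rf /var/lib/apt/lists/*"
        )
    user = image.get("user")
    if user:
        lines.append(f"RUN useradd -r {user}")
        lines.append(f"USER {user}")
    command = image.get("command")
    if command:
        args = ", ".join(f'"{part}"' for part in command.split())
        lines.append(f"CMD [{args}]")
    return "\n".join(lines) + "\n"

for name, image in tomllib.loads(config_text).items():
    print(f"# Dockerfile for {name}")
    print(render_dockerfile(image))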

There are more things that you can do in an automated fashion using a tool like this to remove the duplicated stuff in Dockerfiles. You can go a step beyond as well.

One idea that I had was to manage system dependencies for applications. You can even give up control of the install, base and user declarations and do something like:

[my-image]
wants = [ "libffi.so.1", "libopenssl.so.1", "libpython.so.3.5", "mysqld"]
command = "mailman start"

wants is an even more declarative way to define which precise dependencies you have. For example, you may want MySQL, whose commonly known binary is mysqld, and it comes from different packages on different operating systems. This wishful tool could figure out which library or binary is provided by which package on apt and dnf systems.

You could also just figure out which base images you already have in your container registry that you can re-use, so that you don't have to build an image at all.

This tool could go even a step further to bridge the gap between "cloud native systems" and "traditional applications". Tons of people write yet more shell scripts just to render environment variables into the configuration files that traditional applications expect.

None of these scripts actually sets up any applications, because there are so many ways to set up applications that there is no way to generalize it. This is more about the management of system dependencies and the operating system, which is actually a hard process.

I am not sure if I have convinced anyone that this could be something useful, but I really wish something like this existed. I would certainly use it. This being a TOML file, I can even put multiple applications in a single file and look at them together, pulling the common parts out into an image right at the top and deriving the rest of the images from it.

[base-image]
base = "ubuntu:18.10"
install = ["openssl", "libffi-dev", "libtorrent-devel", "mysql"]
user = "mailman"

[container-image-1]
# reference to the image above.
base = "ref:base-image"
command = "mailman start"

[container-image-2]
base = "ref:base-image"
command = "mysqld"

This generates two images from the same base image, but with different commands. I know it is not the best example, but my intention was only to show what is possible. Users will then come up with innovative ways to do other things with it :)

by Abhilash Raj at August 24, 2019 09:06 PM

August 22, 2019

Robin Schubert

Book review: Shakthi Kannan - I want to do project. Tell me wat to do.

When I came out of university with my Master's in physics, I knew a little about physics, about programming, and about how to write a Master's thesis. But I had no idea how to behave in a team of developers, how to write to mailing lists, or how to use IRC channels and other media effectively to get ahead - and yet I wanted to contribute to the FOSS community, and these are exactly the things that open source developers do every day.

But I did not know that either. I have written before about my experiences in the #dgplug summertraining, where, among other things, I heard for the first time about proper communication, coding style and organization in FOSS projects. I could program, but I did not know how to go about it so that others could benefit from it too.

Shakthi Kannan, the author of this book, is one of the mentors in the dgplug summertraining from whom I have learned a lot.

The unwritten rules of open source

This book will not teach anyone how to program, but it shows how to program in a way that produces readable and maintainable code for others, so that you can become part of the Free and Open Source community that shapes a large part of today's IT.

It is full of good habits and style guides that you should take to heart, of rules of conduct, and of methods for contributing continuously and consistently in order to avoid frustration and discouragement.

Bad examples are the best examples

While only a few university graduates know the rules and workflows described here, they are the absolute basics when it comes to FOSS development. If you have ever sent an e-mail to a mailing list or asked a question in an IRC channel and received either no answers or only useless ones, you can imagine what this is about. Maybe you did not even dare to ask anything in the first place, out of fear of doing something wrong, or you did not know where to find potential help.

The book contains a series of examples of failed communication, typical code formatting mistakes, and so on, the kind of thing busy FOSS developers cannot wrestle with every single day, and it gives suggestions and tips for improvement. Other chapters take you step by step through the workflows for reporting, reproducing and fixing software bugs, and for preparing the fix so that it can be added to the project; the standard workflows of open source software development.

If you have always wanted to contribute to a FOSS project but don't quite know where to start, read this book ;)

by Robin Schubert at August 22, 2019 12:00 AM

August 18, 2019

Priyanka Saggu

The Simplified playbook!

August 18, 2019

Ah, I literally procrastinated a lot over writing this blog post. But I “actually” don’t regret it this once, because the time was well spent with my parents and family.

Anyway, before moving forward with other tasks in the series, I am supposed to finish the backlog (finish writing my 2 blog posts, including this one).

And therefore, let me quickly describe the reason for writing this post.

This post doesn’t actually have a distinct topic; rather, it’s just an update/improvement to one of my last blog posts, “A guide to a “safer” SSH!”. Back there, I was doing almost every errand by writing separate individual tasks. For instance, when I had to make changes in the sshd_config file, my approach was to find the intended lines using a “regex” and replace each one of them individually with the new required configuration. The same was the case while writing iptables rules on a remote machine through the Ansible playbook.

But this individual execution of related tasks was making the whole Ansible deployment process extremely time-consuming, and it made the playbook itself look unnecessarily lengthy and complex. Thus, the real idea behind writing these playbooks, automating stuff in a faster and easier manner, proved to be pretty much worthless in my case.

So, here I am, taking kushal’s advice and improving these Ansible playbooks to achieve simplicity and better-optimized execution time. The whole idea is to compile these related changes (for example, the sshd_config changes needed for SSH hardening) into a single file and copy this file to the intended location/path/directory on the remote node/server.

Let me quickly walk you through some simple hands-on examples to make the idea more precise and show it in action. (We will just be improving our existing Ansible playbooks.)

  • So, earlier in the post, while writing the “ssh role“, our tasks looked something like this:
---
# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
          user: "{{username}}"
          state: present
          key: "{{ lookup('file', item) }}"
  with_fileglob: public_keys/*.pub
- name: Harden sshd configuration
  lineinfile:    
          dest: /etc/ssh/sshd_config    
          regexp: "{{item.regexp}}"    
          line: "{{item.line}}"
          state: present
  with_items:
    - regexp: "^#?PermitRootLogin"
      line: "PermitRootLogin no"
    - regexp: "^^#?PasswordAuthentication"
      line: "PasswordAuthentication no"
    - regexp: "^#?AllowAgentForwarding"
      line: "AllowAgentForwarding no"
    - regexp: "^#?AllowTcpForwarding"
      line: "AllowTcpForwarding no"
    - regexp: "^#?MaxAuthTries"
      line: "MaxAuthTries 2"
    - regexp: "^#?MaxSessions"
      line: "MaxSessions 2"
    - regexp: "^#?TCPKeepAlive"
      line: "TCPKeepAlive no"
    - regexp: "^#?UseDNS"
      line: "UseDNS no"
    - regexp: "^#?AllowAgentForwarding"
      line: "AllowAgentForwarding no"
- name: Restart sshd
  systemd:
          state: restarted    
          daemon_reload: yes
          name: sshd
...

And if we observe the second-to-last task closely, we are altering each intended line of the sshd_config file individually, which is definitely not required. Rather, all the changes can be made at once in a copy of the existing “sshd_config” file, which is then sent to the remote node at the required location/path/directory.

This copied sshd_config file will reside in the “files/” directory of our “ssh role”.

├── ssh
│   ├── defaults
│   │   └── main.yml
│   ├── files   👈(HERE)
│   ├── handlers
│   │   └── main.yml
│   ├── meta
│   │   └── main.yml
│   ├── README.md
│   ├── tasks
│   │   └── main.yml
│   ├── templates
│   ├── tests
│   │   ├── inventory
│   │   └── test.yml
│   └── vars
│       └── main.yml
  • Copy the local sshd_config file to this files/ directory.
# like in my case, the ansible playbook is residing at "/etc/ansible/playbooks/"
$ sudo cp /etc/ssh/sshd_config /etc/ansible/playbooks/ssh/files/
  • And then make the required changes in this file, as specified in the second-to-last task of our old “ssh role”.
  • Finally, modify the “ssh role” by replacing that task with a task that copies this file to the “/etc/ssh/” directory on the remote node, thus removing the unnecessary repetitive steps.
  • Now, the new “ssh role” would look like the following.
---
# tasks file for ssh
- name: Add local public key for key-based SSH authentication
  authorized_key:
          user: "{{username}}"
          state: present
          key: "{{ lookup('file', item) }}"
  with_fileglob: public_keys/*.pub
- name: Copy the modified sshd_config file to remote node's /etc/ssh/ directory.
  copy:
    src: /etc/ansible/playbooks/ssh/files/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
- name: Restart sshd
  systemd:
          state: restarted    
          daemon_reload: yes
          name: sshd
...

And we are done. This will execute considerably faster than the old Ansible role, and it looks much simpler as well.
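
One extra precaution worth adding here (my own suggestion, not part of the original role): a broken sshd_config can lock you out when sshd restarts, so it is worth testing the edited file before the playbook copies it over.

# sshd exits non-zero if the modified config has syntax errors
$ sudo sshd -t -f /etc/ansible/playbooks/ssh/files/sshd_config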


Similar improvements can be made to the “iptables role” as well.

Our old “iptables role” looked something like this:

---
# tasks file for iptables
- name: Install the `iptables` package
  package:
    name: iptables
    state: latest
- name: Flush existing firewall rules
  iptables:
    flush: true
- name: Firewall rule - allow all loopback traffic
  iptables:
    action: append
    chain: INPUT
    in_interface: lo
    jump: ACCEPT
- name: Firewall rule - allow established connections
  iptables:
    chain: INPUT
    ctstate: ESTABLISHED,RELATED
    jump: ACCEPT
- name: Firewall rule - allow port ping traffic
  iptables:
    chain: INPUT
    jump: ACCEPT
    protocol: icmp
- name: Firewall rule - allow port 22/SSH traffic
  iptables:
    chain: INPUT
    destination_port: 22
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - allow port80/HTTP traffic
  iptables:
    chain: INPUT
    destination_port: 80
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - allow port 443/HTTPS traffic
  iptables:
    chain: INPUT
    destination_port: 443
    jump: ACCEPT
    protocol: tcp
- name: Firewall rule - drop any traffic without rule
  iptables:
    chain: INPUT
    jump: DROP
- name: Firewall rule - drop any traffic without rule
  iptables:
    chain: INPUT
    jump: DROP
- name: Install `netfilter-persistent` && `iptables-persistent` packages
  package:
      name: "{{item}}"
      state: present
  with_items:
     - iptables-persistent
     - netfilter-persistent
...
  • In order to simplify it, create a new file named “rules.v4” in the “files/” directory of the “iptables role” and paste the following iptables rules in there.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [151:12868]
:sshguard - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A INPUT -j DROP
COMMIT
  • And the final step would be the same as in the role above, i.e. copying this “rules.v4” file into the “/etc/iptables” directory of the remote node.
  • So, the new improved “iptables role” will now look like the following.
---
# tasks file for iptables
- name: Install the `iptables` package
  package:
    name: iptables
    state: latest
- name: Flush existing firewall rules
  iptables:
    flush: true
- name: Inserting iptables rules in the "/etc/iptables/rules.v4" file.
  copy:
    src: /etc/ansible/playbooks/iptables/files/rules.v4
    dest: /etc/iptables/rules.v4
    owner: root
    group: root
    mode: 0644
- name: Install `netfilter-persistent` && `iptables-persistent` packages
  package:
      name: "{{item}}"
      state: present
  with_items:
     - iptables-persistent
     - netfilter-persistent
...

That’s all for this quick blog post on how to efficiently rewrite repetitive, related tasks in an Ansible playbook.

Hope it helped.

Till next time. o/

[Note:- I will link this article in the old post as an update.]

by priyankasaggu119 at August 18, 2019 11:16 PM

August 13, 2019

Bhavin Gandhi

Organizing PythonPune Meetups

One thing I like most about meetups is that you get to meet new people. Talking with people and hearing what they are doing helps a lot in gaining more knowledge. It is also a good platform for making connections with people who have similar areas of interest. I have been attending the PythonPune meetup for the last 2 years. In this blog post, I will share some history of this group and how I got involved in organizing the meetups.

by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at August 13, 2019 03:23 PM

August 11, 2019

Abhilash Raj

nonlocal statement in Python

Today, while spelunking through Python's documentation, I discovered a statement which isn't very commonly known or used: nonlocal. Let's see what it is and what it does.

The nonlocal statement is pretty old; it was introduced by PEP 3104 and allows a name to be re-bound in a scope other than the local and global scopes, namely the enclosing scope. What does that mean? Consider this Python function:

def function():
    x = 100
    def incr_print(y):
        print(x + y)
    incr_print(100)

Trying to run this function will give you the expected output:

In [5]: function()          
200

In this case, we see that the inner function, incr_print, is able to read the value of x from its outer scope, i.e. function.

Now consider this function instead:

def function():
    x = 100
    def incr(y):
        x = x + y
    incr(100)

It is pretty simple, but when you try to run it, it fails with:

---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-2-30ca0b4348da> in <module>
----> 1 function()

<ipython-input-1-61421989fe16> in function()
      3     def incr(y):
      4         x = x + y
----> 5     incr(100)
      6 

<ipython-input-1-61421989fe16> in incr(y)
      2     x = 100
      3     def incr(y):
----> 4         x = x + y
      5     incr(100)
      6 

UnboundLocalError: local variable 'x' referenced before assignment

So, you can read a variable from the outer scope, but you can't write to it, because Python won't allow re-binding a name in the outer scope. But there is a way out of this; you must have read about the global statement:

In [6]: z = 100                                                               
In [8]: def function(): 
   ...:     global z 
   ...:     z = z + 100 
   ...:                                                                       
In [9]: function()                                                             
In [10]: z
Out[10]: 200

So, you can actually re-bind names in the global scope using the global statement. To fill the corresponding gap for re-binding names in the enclosing scope, the nonlocal statement was added:

def function():
    x = 100
    def incr(y):
        nonlocal x
        x = x + y
    incr(100)
    print(x)

When you run this:

In [13]: function()   
200

You can read more details in the documentation and the PEP 3104 itself.

Thanks to Mario Jason Braganza for proof-reading and pointing out typos in this post.

by Abhilash Raj at August 11, 2019 07:21 PM

August 10, 2019

Armageddon

Git! Rebase and Strategies

In the previous topic, I talked about git remotes because it felt natural after branching and merging.

Now, the time has come to talk a little bit about rebase and some good cases to use it for.

Read more… (4 min remaining to read)

by Elia El Lazkani at August 10, 2019 04:00 AM

July 29, 2019

Sayan Chowdhury

Force git to use git:// instead of https://


I'm an advocate of using SSH authentication for connecting to services like Github, Gitlab, and many others. I make sure to use the git:// URL while cloning a repo, but sometimes I make the mistake of using the https:// one instead, only to realise it later when git prompts me for my username to authenticate the HTTPS connection. This is when I have to manually reset my git remote URL.

Today, I found a cleaner solution to this problem. I can use insteadOf to enforce the connection via SSH.

git config --global url."git@github.com:".insteadOf "https://github.com/"

This creates an entry in your .gitconfig:

[url "git@github.com:"]
	insteadOf = https://github.com/

Photo by Yancy Min on Unsplash

by Sayan Chowdhury at July 29, 2019 06:52 AM

July 26, 2019

Robin Schubert

Internet Connection

It's hard to believe that it's been only two years since so many things in my life changed drastically. I have never before had the feeling of learning so many new things. I have found friends whom I admire and whose company I enjoy.

I am not connected to the internet - I am connected to people, via the internet

Or as John Perry Barlow stated in the Declaration of the Independence of Cyberspace:

Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

This applies with all its positive and negative aspects. However much I wish I could sometimes come over physically, to have a beer and chat or to not talk at all, I am grateful for our new home of minds where we can share our thoughts, free from physical constraints.

We have never met in person - That does not make the meeting less personal

Thank you, #dgplug

P.S.: Looking forward to learning more, sharing more thoughts, and nevertheless meeting in person, eventually ;-)

by Robin Schubert at July 26, 2019 12:00 AM

July 22, 2019

Shakthi Kannan

Aerospike Wireshark Lua plugin workshop, Rootconf 2019, Bengaluru

Rootconf 2019 was held on June 21-22, 2019 at NIMHANS Convention Centre, in Bengaluru on topics ranging from infrastructure security, site reliability engineering, DevOps and distributed systems.

Rootconf 2019 Day 1

Day I

I had proposed a workshop titled “Shooting the trouble down to the Wireshark Lua Plugin” for the event, and it was selected. I have been working on the “Aerospike Wireshark Lua plugin” for dissecting Aerospike protocols, and hence I wanted to share insights on it. The plugin source code is released under the AGPLv3 license.

“Wireshark” is a popular Free/Libre and Open Source Software protocol analyzer used for analyzing protocols and troubleshooting networks. The “Lua programming language” is useful for extending C projects to allow developers to do scripting. Since Wireshark is written in C, its plugin extension is provided through Lua. Aerospike uses the Paxos family of protocols and custom-built protocols for distributed database operations, and the plugin has been quite useful for packet dissection and for solving customer issues.

Rootconf 2019 Day 1

The workshop had both theory and lab exercises. I began with an overview of Lua, Wireshark GUI, and the essential Wireshark Lua interfaces. The Aerospike Info protocol was chosen and exercises were given to dissect the version, type and size fields. I finished the session with real-world examples, future work and references. Around 50 participants attended the workshop, and those who had laptops were able to work on the exercises. The workshop presentation and lab exercises are available in the aerospike-wireshark-plugin/docs/workshop GitHub repository.

I had follow-up discussions with the participants before moving to the main auditorium. “Using pod security policies to harden your Kubernetes cluster” by Suraj Deshmukh was an interesting talk on the level of security that should be employed with containers. After lunch, I started my role as emcee in the main auditorium.

The keynote of the day was by Bernd Erk, the CEO at Netways GmbH, who is also the co-founder of the Icinga project. He gave an excellent talk on “How convenience is killing open standards”. He gave numerous examples on how people are not aware of open standards, and take proprietary systems for granted. This was followed by flash talks from the audience. Jaskaran Narula then spoke on “Securing infrastructure with OpenScap: the automation way”, and also shared a demo of the same.

After the tea break, Shubham Mittal gave a talk on “OSINT for Proactive Defense” in which he shared the Open Source Intelligence (OSINT) tools, techniques and procedures to protect the perimeter security for an organization. The last talk of the day was by Shadab Siddiqui on “Running a successful bug bounty programme in your organization”.

Day II

Anant Shrivastava started the day’s proceedings with a recap on the talks from day one.

The first talk of the day was by Jiten Vaidya, co-founder and CEO at Planetscale who spoke on “OLTP or OLAP: why not both?”. He gave an architectural overview of vitess.io, a Free/Libre and Open Source sharding middleware for running OLTP workloads. The design looked like they were implementing the Kubernetes features on a MySQL cluster. Ratnadeep Debnath then spoke on “Scaling MySQL beyond limits with ProxySQL”.

After the morning break, Brian McKenna gave an excellent talk on “Functional programming and Nix for reproducible, immutable infrastructure”. I have listened to his talks at the Functional Programming conference in Bengaluru, and they have been using Nix in production. The language constructs and cases were well demonstrated with examples. This was followed by yet another excellent talk by Piyush Verma on “Software/Site Reliability of Distributed Systems”. He took a very simple request-response example, and incorporated site reliability features, and showed how complex things are today. All the major issues, pitfalls, and troubles were clearly explained with beautiful illustrations.

Aaditya Talwai presented his talk on “Virtuous Cycles: Enabling SRE via automated feedback loops” after the lunch break. This was followed by Vivek Sridhar’s talk on “Virtual nodes to auto-scale applications on Kubernetes”. Microsoft has been investing heavily in Free/Libre and Open Source, and has been hiring a lot of Python developers as well. Satya Nadella has been bringing in a lot of changes, and it will be interesting to see their long-term progress. After Vivek’s talk, we had a few slots for flash talks from the audience, and then Deepak Goyal gave his talk on “Kafka streams at scale”.

After the evening beverage break, Øystein Grøvlen gave an excellent talk on PolarDB - A database architecture for the cloud. They are using it with Alibaba in China to handle petabytes of data. The computing layer and the shared storage layer are distinct, and they use the RDMA protocol for cluster communication. They still use a single master and multiple read-only replicas. They are exploring parallel query execution to improve the performance of analytical queries.

Rootconf 2019 Day 2

Overall, the talks and presentations were very good for 2019. Time management is of utmost importance at Rootconf, and we have been very consistent. I was happy to emcee again for Rootconf!

July 22, 2019 03:00 PM

July 21, 2019

Rahul Jha

The [deceptive] power of visual explanation

Quite recently, I came across Jay Alammar’s rather beautiful blog post, “A Visual Intro to NumPy & Data Representation”.

Before reading this, whenever I had to think about an array:


In [1]: import numpy as np

In [2]: data = np.array([1, 2, 3])

In [3]: data
Out[3]: array([1, 2, 3])

I used to create a mental picture somewhat like this:


       ┌────┬────┬────┐
data = │  1 │  2 │  3 │
       └────┴────┴────┘

But Jay, on the other hand, uses a vertical stack for representing the same array.

Image from Jay's blog post.

At the first glance, and owing to the beautiful graphics Jay has created, it makes perfect sense.

Now, if you had only seen this image, and I asked you the dimensions of data, what would your answer be?

The mathematician inside you barks (3, 1).

But, to my surprise, this wasn’t the answer:


In [4]: data.shape
Out[4]: (3,)

(3,) eh? Wondering what a (3, 1) array would look like?


In [5]: data.reshape((3, 1))
Out[5]:
array([[1],
       [2],
       [3]])

Hmm. This begs the question: what is the difference between an array of shape (R,) and one of shape (R, 1)? A little bit of research landed me at this answer on StackOverflow. Let’s see:

The best way to think about NumPy arrays is that they consist of two parts, a data buffer which is just a block of raw elements, and a view which describes how to interpret the data buffer.

For example, if we create an array of 12 integers:


>>> a = numpy.arange(12)
>>> a
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])

Then a consists of a data buffer, arranged something like this:


 ┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
 │  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
 └────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

and a view which describes how to interpret the data:


    >>> a.flags
      C_CONTIGUOUS : True
      F_CONTIGUOUS : True
      OWNDATA : True
      WRITEABLE : True
      ALIGNED : True
      UPDATEIFCOPY : False
    >>> a.dtype
    dtype('int64')
    >>> a.itemsize
    8
    >>> a.strides
    (8,)
    >>> a.shape
    (12,)

Here the shape (12,) means the array is indexed by a single index which runs from 0 to 11. Conceptually, if we label this single index i, the array a looks like this:

i= 0    1    2    3    4    5    6    7    8    9   10   11
  ┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
  │  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
  └────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

If we reshape an array, this doesn’t change the data buffer. Instead, it creates a new view that describes a different way to interpret the data. So after:

>>> b = a.reshape((3, 4))

the array b has the same data buffer as a, but now it is indexed by two indices which run from 0 to 2 and 0 to 3 respectively. If we label the two indices i and j, the array b looks like this:


i= 0    0    0    0    1    1    1    1    2    2    2    2
j= 0    1    2    3    0    1    2    3    0    1    2    3
┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│  0 │  1 │  2 │  3 │  4 │  5 │  6 │  7 │  8 │  9 │ 10 │ 11 │
└────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘

So, if we were to actually have a (3, 1) array, it would have the exact same vertical-stack representation as a (3,) array, which is what creates the confusion.

So, what about the horizontal representation?

An argument can be made that the horizontal representation could just as easily be misinterpreted as a (1, 3) matrix, but our brains are so accustomed to seeing it as a 1-D array that this is almost never the case (at least with folks who have worked with Python before).
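To see why the distinction matters in practice, here is a quick check of my own (not from Jay’s post): the two shapes behave very differently under NumPy broadcasting, even though their pictures look alike.


>>> import numpy as np
>>> flat = np.array([1, 2, 3])        # shape (3,)
>>> column = flat.reshape((3, 1))     # shape (3, 1)
>>> (flat + flat).shape               # element-wise, stays one-dimensional
(3,)
>>> (column + flat).shape             # broadcasting blows this up to a 3x3 grid
(3, 3)

So the two shapes are not interchangeable, even if a drawing of either one looks like "three boxes in a line".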

Of course, it all makes perfect sense now, but it did take me a while to figure out what exactly was going on under the hood here.


Visual Explanation of Fourier Series - decomposition of a square wave into an infinite sum of sinusoids. From this answer on math.stackexchange.com

I also realized that while it is hugely helpful to visualize something when learning about it, one should always take the visual representation with a grain of salt. As we have seen, such representations are not always entirely accurate.

For now, I’m sticking to my prior way of picturing a 1-D array as a horizontal list to avoid the confusion. I shall update the blog if I find anything otherwise.

My point is not that Jay’s drawings are flawed, but that we are all susceptible to visual deceptions. In this case, it was relatively easy to figure out, because it was code, which forces one to pay attention to each and every detail, however minor it may be.

After all, the human brain, prone to so many biases and taking shortcuts for nearly every decision we make (thus leaving room for sanity), isn’t anywhere near as perfect as it thinks it is.

July 21, 2019 06:30 PM

July 18, 2019

Rahul Jha

My Experience with OBM

If you want an overview of OBM, please read my earlier post on the same.

I’ve participated in three sprints so far, in which I’ve completely failed myself, but I’m already experiencing drastic changes in my habits, which is good.

Here is what I’ve learned from this short, but significant experience:

First and foremost, the structure of OBM forces you to formalize things. You need to set up goals for yourself. Even better, the setup makes it very difficult to be vague: you have to lay out the smaller tasks you need to achieve in order to complete the goal. The research required for listing these tasks (thus providing you with a big picture) and for getting a correct estimate of the time required helps you plan efficiently.

The next thing is priority - what you decide to do now. I tend to perform better if I have only 3 things on my TODO list rather than 10, and OBM accommodates that: send in a list of all the tasks you want to work on for the next 15 days, and then spend time doing them rather than managing your list.

The difficult thing about writing that blog post you’ve been thinking about for a week, or deciphering the math equation which popped out of nowhere in that paper, often isn’t the actual writing or analysis. It’s carving dedicated time out of your time-poor schedule just for this. Once you get started, it’s way easier.

A nice analogy for this argument: the hardest part of going to the gym is physically getting there. Once you’re there, all geared up and warmed up, the exercise is much more fun. Getting over this initialization barrier is a must. The way I manage it is by having slots in my schedule named “OBM”, during which I only work on the tasks I’ve listed for OBM. No, you aren’t allowed to browse Twitter during that time - just start grinding, and you shall reap the rewards afterwards.

One other important behaviour I’ve observed is the misalignment between what I believe I’m interested in and how much I can afford to work towards it. If I repeatedly find myself not engaging with a task, I know it’s not for me and I quit early (thus saving my time and resources from going further down the drain; more about this in “The Dip” by Seth Godin).

Conclusion

OBM serves as a great tool for introspection, monitoring one’s progress and getting things done. As side ‘effects’, it also gives you a taste of professionalism, punctuality and reporting relationships - a complete package aimed towards self improvement. \o/

But if you’re overwhelmed by the notion of public accountability just yet, I’d recommend running your own personal OBM and seeing the difference. If you want any more advice, feel free to contact me (RJ722 on #dgplug, freenode).

July 18, 2019 06:30 PM

June 20, 2019

Bhavin Gandhi

infracloud.io: Introducing : Tracing Cassandra with Jaeger

This is a blog post about a plugin for Cassandra, which I wrote a few days back. It covers basic information about the three pillars of observability: logging, metrics, and tracing. Thanks to Sameer, who helped me with my doubts related to Java and Maven. The blog post was published at infracloud.io on 19th June, 2019: Introducing : Tracing Cassandra with Jaeger

by Bhavin Gandhi (bhavin192@removethis.geeksocket.in) at June 20, 2019 04:07 PM

June 02, 2019

Shakthi Kannan

Building Erlang/OTP sources with Ansible

[Published in Open Source For You (OSFY) magazine, September 2017 edition.]

Introduction

Erlang is a programming language designed by Ericsson primarily for soft real-time systems. The Open Telecom Platform (OTP) consists of libraries, applications and tools to be used with Erlang to implement services that require high availability. In this article, we will create a test Virtual Machine (VM) to compile, build, and test Erlang/OTP from its source code. This allows you to create VMs with different Erlang release versions for testing.

The Erlang programming language was developed by Joe Armstrong, Robert Virding and Mike Williams in 1986 and released as free and open source software in 1998. It was initially designed to work with telecom switches, but is widely used today in large scale, distributed systems. Erlang is a concurrent and functional programming language, and is released under the Apache License 2.0.

Setup

A CentOS 6.8 Virtual Machine (VM) running on KVM will be used for the installation. Internet access should be available from the guest machine. The VM should have at least 2 GB of RAM allotted to build the Erlang/OTP documentation. The Ansible version used on the host (Parabola GNU/Linux-libre x86_64) is 2.3.0.0. The ansible/ folder contains the following files:

ansible/inventory/kvm/inventory
ansible/playbooks/configuration/erlang.yml

The IP address of the guest CentOS 6.8 VM is added to the inventory file as shown below:

erlang ansible_host=192.168.122.150 ansible_connection=ssh ansible_user=bravo ansible_password=password

An entry for the erlang host is also added to the /etc/hosts file as indicated below:

192.168.122.150 erlang

A ‘bravo’ user account is created on the test VM, and is added to the ‘wheel’ group. The /etc/sudoers file also has the following line uncommented, so that the ‘bravo’ user will be able to execute sudo commands:

## Allows people in group wheel to run all commands
%wheel	ALL=(ALL)	ALL

We can obtain the Erlang/OTP sources from a stable tarball, or clone the Git repository. The steps involved in both these cases are discussed below:

Building from the source tarball

The Erlang/OTP stable releases are available at http://www.erlang.org/downloads. The build process is divided into many steps, and we shall go through each one of them. The version of Erlang/OTP can be passed as an argument to the playbook. Its default value is the release 19.0, and is defined in the variable section of the playbook as shown below:

vars:
  ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
  ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
  ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
  TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

The ERL_DIR variable represents the directory where the tarball will be downloaded, and the ERL_TOP variable refers to the top-level directory location containing the source code. The path to the test directory from where the tests will be invoked is given by the TEST_SERVER_DIR variable.

Erlang/OTP has mandatory and optional package dependencies. Let’s first update the software package repository, and then install the required dependencies as indicated below:

tasks:
  - name: Update the software package repository
    become: true
    yum:
      name: '*'
      update_cache: yes

  - name: Install dependencies
    become: true
    package:
      name: "{{ item }}"
      state: latest
    with_items:
      - wget
      - make
      - gcc
      - perl
      - m4
      - ncurses-devel
      - sed
      - libxslt
      - fop

The Erlang/OTP sources are written using the ‘C’ programming language. The GNU C Compiler (GCC) and GNU Make are used to compile the source code. The ‘libxslt’ and ‘fop’ packages are required to generate the documentation. The build directory is then created, the source tarball is downloaded and it is extracted to the directory mentioned in ERL_DIR.

- name: Create destination directory
  file: path="{{ ERL_DIR }}" state=directory

- name: Download and extract Erlang source tarball
  unarchive:
    src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
    dest: "{{ ERL_DIR }}"
    remote_src: yes

The ‘configure’ script is available in the sources, and it is used to generate the Makefile based on the installed software. The ‘make’ command will build the binaries from the source code.

- name: Build the project
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - ./configure
    - make
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

After the ‘make’ command finishes, the ‘bin’ folder in the top-level sources directory will contain the Erlang ‘erl’ interpreter. The Makefile also has targets to run tests to verify the built binaries. We are remotely invoking the test execution from Ansible, and hence -noshell -noinput are passed as arguments to the Erlang interpreter, as shown in the playbook tasks below.

- name: Prepare tests
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make release_tests
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

- name: Execute tests
  shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

You need to verify that the tests have passed successfully by checking the $ERL_TOP/release/tests/test_server/index.html page in a browser. A screenshot of the test results is shown in Figure 1:

Erlang test results

The built executables and libraries can then be installed on the system using the make install command. By default, the install directory is /usr/local.

- name: Install
  command: "{{ item }} chdir={{ ERL_TOP }}"
  with_items:
    - make install
  become: true
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

The documentation can also be generated and installed as shown below:

- name: Make docs
  shell: "cd {{ ERL_TOP }} && make docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"
    FOP_HOME: "{{ ERL_TOP }}/fop"
    FOP_OPTS: "-Xmx2048m"

- name: Install docs
  become: true
  shell: "cd {{ ERL_TOP }} && make install-docs"
  environment:
    ERL_TOP: "{{ ERL_TOP }}"

The total available RAM (2 GB) is specified in the FOP_OPTS environment variable. The complete playbook to download, compile, execute the tests, and also generate the documentation is given below:

---
- name: Setup Erlang build
  hosts: erlang
  gather_facts: true
  tags: [release]

  vars:
    ERL_VERSION: "otp_src_{{ version | default('19.0') }}"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ ERL_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Download and extract Erlang source tarball
      unarchive:
        src: "http://erlang.org/download/{{ ERL_VERSION }}.tar.gz"
        dest: "{{ ERL_DIR }}"
        remote_src: yes

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Prepare tests
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make release_tests
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Execute tests
      shell: "cd {{ TEST_SERVER_DIR }} && {{ ERL_TOP }}/bin/erl -noshell -noinput -s ts install -s ts smoke_test batch -s init stop"

    - name: Install
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - make install
      become: true
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

    - name: Make docs
      shell: "cd {{ ERL_TOP }} && make docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"
        FOP_HOME: "{{ ERL_TOP }}/fop"
        FOP_OPTS: "-Xmx2048m"

    - name: Install docs
      become: true
      shell: "cd {{ ERL_TOP }} && make install-docs"
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The playbook can be invoked as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml -e "version=19.0" --tags "release" -K

Build from Git repository

We can build the Erlang/OTP sources from the Git repository. The complete playbook is given below for reference:

- name: Setup Erlang Git build
  hosts: erlang
  gather_facts: true
  tags: [git]

  vars:
    GIT_VERSION: "otp"
    ERL_DIR: "{{ ansible_env.HOME }}/installs/erlang"
    ERL_TOP: "{{ ERL_DIR }}/{{ GIT_VERSION }}"
    TEST_SERVER_DIR: "{{ ERL_TOP }}/release/tests/test_server"

  tasks:
    - name: Update the software package repository
      become: true
      yum:
        name: '*'
        update_cache: yes

    - name: Install dependencies
      become: true
      package:
        name: "{{ item }}"
        state: latest
      with_items:
        - wget
        - make
        - gcc
        - perl
        - m4
        - ncurses-devel
        - sed
        - libxslt
        - fop
        - git
        - autoconf

    - name: Create destination directory
      file: path="{{ ERL_DIR }}" state=directory

    - name: Clone the repository
      git:
        repo: "https://github.com/erlang/otp.git"
        dest: "{{ ERL_DIR }}/otp"

    - name: Build the project
      command: "{{ item }} chdir={{ ERL_TOP }}"
      with_items:
        - ./otp_build autoconf
        - ./configure
        - make
      environment:
        ERL_TOP: "{{ ERL_TOP }}"

The ‘git’ and ‘autoconf’ software packages are required for downloading and building the sources from the Git repository. The Ansible Git module is used to clone the remote repository. The source directory provides an otp_build script to create the configure script. You can invoke the above playbook as follows:

$ ansible-playbook -i inventory/kvm/inventory playbooks/configuration/erlang.yml --tags "git" -K

You are encouraged to read the complete installation documentation at: https://github.com/erlang/otp/blob/master/HOWTO/INSTALL.md.

June 02, 2019 07:45 PM

April 08, 2019

Subho

Increasing Postgres column name length

This blog post is more like a bookmark for me; the solution was scavenged from the internet. Recently I have been working on an analytics project where I had to generate pivot (transpose) tables from the data. This is the first time I faced the limitations set on a Postgres database. Since it is a pivot, one of my columns would be transposed and its values used as column names, and this is where things started breaking. Writing to Postgres failed with an error stating that column names are not unique. After some digging I realized Postgres has a column name limit of 63 bytes; anything longer is truncated, so after truncation multiple keys became the same, causing this issue.

The next step was to look at the data in my column; values ranged from 20 to 300 characters long. I checked Redshift and BigQuery, and they have similar limitations too (128 bytes). After looking around for some time I found a solution: download the Postgres source, change NAMEDATALEN to 301 in src/include/pg_config_manual.h (remember, the maximum column name length is always NAMEDATALEN - 1), then follow the steps from the Postgres docs to compile the source, install, and run Postgres. This has been tested on Postgres 9.6 as of now and it works.

Next up, I faced issues with the maximum number of columns: my pivot table had 1968 columns and Postgres has a limit of 1600 columns per table. Following this answer I looked into the source comments, and that looked quite overwhelming 😛. Also, I do not have control over how many columns there will be post pivot, so no matter what value I set, I might need more columns in the future. Instead, I handled the scenario in my application code by splitting the data across multiple tables and storing them, as sketched below.
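Here is a minimal sketch of that splitting idea (my own illustration, assuming pandas and SQLAlchemy; the table names, column cap and connection string are made up, not from my actual project): chunk a wide pivot DataFrame column-wise and write each chunk to its own table, keeping the index so the parts can be joined back later.


import pandas as pd
from sqlalchemy import create_engine

MAX_COLS = 1500  # stay safely below Postgres' 1600-column limit


def store_wide_frame(df: pd.DataFrame, base_name: str, engine) -> None:
    # write df to several tables named <base_name>_0, <base_name>_1, ...
    for part, start in enumerate(range(0, df.shape[1], MAX_COLS)):
        chunk = df.iloc[:, start:start + MAX_COLS]
        # keep the DataFrame index so the parts can be joined back together
        chunk.to_sql(f"{base_name}_{part}", engine, if_exists="replace", index=True)


engine = create_engine("postgresql://user:password@localhost/analytics")  # hypothetical DSN
# store_wide_frame(pivot_df, "pivot_result", engine)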

References:

  1. https://til.hashrocket.com/posts/8f87c65a0a-postgresqls-max-identifier-length-is-63-bytes
  2. https://stackoverflow.com/questions/6307317/how-to-change-postgres-table-field-name-limit
  3. https://www.postgresql.org/docs/9.6/install-short.html
  4. https://dba.stackexchange.com/questions/40137/in-postgresql-is-it-possible-to-change-the-maximum-number-of-columns-a-table-ca

by subho at April 08, 2019 09:25 AM

April 06, 2019

Tosin Damilare James Animashaun

To be Forearmed is to be Help-ready

I felt compelled to write this after my personal experience trying to get help with my code on IRC.

We all love to make the computer do things exactly the way we want, so some of us choose to take the bold step of learning to communicate with the machine. And it is not uncommon to find many of our burgeoning kind going from place to place on the web trying to get help along the way. We are quick to ask questions the moment we sight help.

When you learn to program, you are often encouraged to learn by doing.

The domain of computer programming or software development is a very practical one. Before now, I had carried this very principle everywhere with me -- in fact, preached it -- but hadn't really put it to use.

The thing about learning languages (or technologies) by reading big manuals is that, oftentimes, beginners will approach the process like they would any other piece of literature. But that is clearly the wrong approach, as empirical evidence has shown. You don't read these things simply to stomach them. Instead, you swallow and then post-process. In essence, you ruminate over stuff.


In truth, the only way you can really process what you read is to try things out and see results for yourself.

Weeks ago, while building an app, I visited IRC frequently to ask questions on just about everything that was unclear to me. While this mode of communication and seeking help is encouraged, abuse of it is strongly discouraged. The good guys over on the IRC channels get pissed off when it appears you're ignoring available resources like documentation, preferring to be spoonfed the whole time. (Remember this is not Quora, where the philosophy is for you to ask more and more questions).

This was the sort of thing that happened to me when I began flooding the channels with my persistent querying. Most of the time the IRC folks kept pointing me to the documentation, as workarounds for most of the issues I had were already documented. A lot of things became much clearer when I decided to finally read-the-docs.

What did I learn from that? "Do your own research!" It's so easy to skip this part, but if you make the effort to find things out for yourself, you'll be surprised at how much you can dig out without having to bug people. This does not guarantee that even the few important questions you do ask won't be met with hostility, but do not let that discourage you. The people who appear to be unwelcoming are doing so only as a way to discourage you from being over-dependent on the channel. Another advantage of finding things out for yourself is that you learn the why and not just the how.

I think it's fair to quote Armin Ronacher here,

"And it's not just asking questions; questioning other people, like what other people do or what you do yourself.

By far, the worst parts in all of my libraries are where I took the design from somewhere else. And it's not because I know better, it's because pretty much everything everybody does at any point in time has some sort of design decision behind it ... that major decision process.

So someone came up with a solution for a particular problem, and he thought about it and then wrote it down. But if you look at someone else's design, you might no longer understand why the decision was made in the first place. And ... in fact, if the original implementation is ten years old, who knows if the design ideas behind it are still entirely correct."


Personally, I like to answer questions and help put people on track. Nonetheless, if the queries got too overwhelming -- especially coming from the same person -- I would eventually lose interest in answering questions.


Let me remind you of some tidbits of IRC etiquette:

  • Construct your questions well (concise, well written and straight-to-the-point questions are more likely to attract help)

  • Don't EVER post code in a channel! Pastebin[1] it and share the link in the channel instead. While at it, don't post your entire code (unless you specifically need to). Post only the relevant portion -- the one you have an issue with. The only exception to this is if the snippet of code is considerably short, say one or two lines.

  • Don't be overly respectful. Yes, don't be too respectful -- cut all the 'Sirs'. Only be moderately polite.

  • Ensure you have and use a registered nick. This gives you an identity.

  • This last one is entirely my opinion but it's also based on what I have observed. Don't just be a leech, try to contribute to the community. Answer questions when you can.


So where do you look before turning to IRC? There are three sources you may read from before turning to internet-relay-chat for help:

  • Read the documentation. Documentation is the manual the creator or experts of a software product or tool provide their users with. So you want to know the ins and outs of a technology? That's the right place to look.

  • Read blog posts related to your topic-area. Blog posts are often based on people's experiences, so you're likely to find help from there, especially if the writer has faced the same issue. Remember to bookmark the really helpful ones as you go ;).

  • Last and very important: read the source code! This is two-fold: first, look carefully into your own code and see what syntax or semantic errors you might have made. Secondly, look into the original code of the libraries/frameworks you are using if they are open source, otherwise revert to the documentation. With this, you have it all stripped to its bare bones. Point blank! The source code reveals everything you need to know, once you know how to read it.


So why not arm yourself properly before going to post that question? That way, you not only make it easier to get help [for yourself], but you also end up better informed.



  1. Some Pastebin platforms I use:

Note: Because Hastebin heavily depends on Javascript, some people have complained of text-rendering issues possibly arising from browser-compatibility issues with it. So take caution using it. That said, I love its ease-of-use. It supports the use of keyboard shortcuts such as [Ctrl]+[S] to Save.

by Tosin Damilare James Animashaun at April 06, 2019 09:40 PM

February 23, 2019

Farhaan Bukhsh

The Late End Year Review – 2018

I know I am really, really late, but better late than never.
This past year has been really formative for me.

In this short personal retrospective post, I am just going to divide my experience into 3 categories, the good, the bad and the ugly best.

The Bad

  1. My father got really sick and I got really scared by the thought of losing him.
  2. I moved on from the first company I joined, because I was getting a bit stifled and yearned to learn and grow more.
  3. My brother got transferred, so I had to live without family for the first time in my life. I had never lived alone before this.
  4. I was not able to take the 3 month sabbatical, I thought I could.
  5. I couldn’t find a stable home and was on the run from one place to another constantly.

The Good

  1. I learnt how to live alone. I learnt how to find peace while being alone. Because of this, I could also explore more books and more importantly I could spend more time by myself figuring out what kind of person I want to become.
  2. I got a job with Clootrack, where people are amazing to work with and there is so much to learn.
  3. I found the chutzpah to quit my job, even though I didn’t have a backup. In a roundabout way, it gave me the strength to take risks in life and the courage to handle its consequences.
  4. Bad times help you discover good friends. I am not trying to boast about it, (but you are 😝– ed) but I am thankful to God that I have an overwhelming number of good friends.
  5. I got asked out in a coffee shop! This has never happened to me before. (BUT YES! THIS HAPPENED!).
  6. I wrote few poems this year, all of them heartfelt.
  7. I gained a measure of financial independence and the experience of how to handle things when everything is going south.
  8. I finally wrote a project and released it. I was fortunate enough to get a few contributors.
  9. I am more aware now, and have stopped taking people and time for granted.
  10. Started Dosa Culture.
  11. Applied to more conferences to give a talk.

The Best

  1. I read more this year and got to learn about a lot of things from Feynman to Krebs. I explored fiction, non fiction, self help, and humour.
  2. I went to Varanasi (home) more than I ever did in the last five years of my life. I spent lots of time with my parents. I am planning to do it more.
  3. Went on a holiday to Pondicherry. I went for a holiday for the first time, with the money I saved up for the trip. I saw the sunrise of 1st January sitting on Rock beach.
  4. Got rejected at all the conferences I applied for. No matter. It motivates me even more, to try harder, to dance on the edge, to learn more, do more. It helps me strive for greatness, while also being a good reality check.
  5. Spent more time on working on hobby projects and contributing to open source.
  6. Got a chance to be a visiting faculty, and teach programming in college.
  7. Lived more! Learnt More! Loved More!

I feel I might be missing quite a few things in the lists, but these are the few, that helped me grow as a person. They impacted me deeply and changed my way of looking at life.

I hope the coming year brings better experiences and more learning!

Until then,
Live Long and Prosper! (so cheesy – ed)

by fardroid23 at February 23, 2019 03:13 PM

February 05, 2019

Anwesha Das

Have a safer internet

Today, 5th February, is Safer Internet Day. The primary aim of this day is to advance the safe and positive use of digital technology for children and young people, and to promote conversation on the issue. So let us discuss a few ideas. The digital medium is where we live today; it has become our world. However, compared to the physical world, this world and its rules are unfamiliar to us. Adding to that, with the advent of social media we are putting our lives, every detail of them, into the domain of social media. We are letting governments, industry lords, political parties, snoops, and society judge, watch, and monitor us. We, the fragile, vulnerable us, have no option but to watch our freedom and privacy vanish.

Do we not have anything to save ourselves? Ta da! Here are some basic ideas which you can try to follow in your everyday life to keep yourself safe in the digital world.

Use unique passphrases

Use passphrases instead of passwords. Passwords are easy to break as well as easy to copy, so instead of using “Frank” (a name) or “Achinpur60” (a part of your address), use a passphrase like “DiscountDangerDumpster”. It is easy to remember and hard to break. You can even mix in two or more languages (easy for us Indians, right?). I used Diceware to generate that passphrase. Moreover, by unique I mean do not use the SAME PASSPHRASE EVERYWHERE. I can feel how difficult, tedious, even impossible it seems to remember lengthy passphrases (not passwords any more, remember!) for all your accounts, but there is no way around it: if someone gets your passphrase for one account, they will be able to get into all of them. Unique passphrases help a lot in this case.
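If you like to tinker, here is a small Diceware-style sketch of my own (not the official Diceware tool): it picks a few random words from a local wordlist using Python's cryptographically secure secrets module. The wordlist path is an assumption; use whichever wordlist you trust.


import secrets


def passphrase(wordlist_path="/usr/share/dict/words", words=4):
    # read a local wordlist and keep only plain alphabetic words
    with open(wordlist_path) as f:
        candidates = [w.strip() for w in f if w.strip().isalpha()]
    # secrets.choice() draws from a cryptographically secure random source
    return "".join(secrets.choice(candidates).capitalize() for _ in range(words))


print(passphrase())  # e.g. something like "DiscountDangerDumpsterOrbit"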

Use password managers

To solve your above-mentioned problem of remembering long passphrases, you have a magic thing called a password manager. Just move your wand (read: mouse) once, and you will find your long passphrases safely protected in its vault. There are many different password managers: LastPass, KeePassXC, etc. If you want to know more about this, please read about it here.

Do not leave your device (computer, phone, etc) unlocked

My 2-year-old once typed some not-so-kind words (thanks to autocorrect) to my in-laws, and the lovely consequences it brought still make me shiver. Thankfully, it was not someone with good technical knowledge and not-so-good intentions, who could have caused much greater, perhaps irrecoverable, damage. So please do not leave your device unlocked.

Do not share your password or your device with anyone

Sharing your password or your device with anyone poses similar kinds of danger as mentioned above.

Do block your webcam and phone’s camera

It is now a well-known fact that attackers spy on us through our web cameras, deceiving users into installing webcam spyware. Many of us may think, “oh, we are safe, our device has indicator lights, so we will know if and when any video recording is happening.” But it is very much possible to disable the activity light through configuration changes and software hacks. So even if there is no light, your video can very well be captured.

Do not ignore security updates

Most of the time, when a security update notification pops up in the morning, we very smoothly ignore it for our morning dose of news or for checking our social media feed. However, that is the most irresponsible thing you can do in your day. It may be your last chance to secure yourself against future danger. Many times cyber attackers take advantage of your old, outdated software and attack you through it. It may be your old PDF reader, web browser or operating system. So the most basic lesson, digital security 101, is to keep your software up to date.

Acquire some basic knowledge about your machine

I know it sounds tedious (trust me, I have passed through that phase), but please acquire some basic knowledge about your machine: e.g. which version of the operating system you are using, what other software is on your machine and their version numbers, and whether and when they require any updates.

Do not download files from random websites on the internet

Do not download files from random websites on the internet; they might contain malware or viruses. That might affect not only your machine but all the devices on the network. So, please check the website you are downloading from.

The same caution applies to the random URLs you receive over email or on social media sites: do not click on them.

Use two-factor authentication

Two-factor authentication is simply two steps of validation. It adds an extra layer of security to your accounts and devices. With 2FA, the user needs to provide two proofs instead of one, typically the password plus a one-time code. It is advisable to have your 2FA app installed on your mobile phone, or even better, to use a hardware token like a YubiKey, so that if someone wants to hack your account, they have to get hold of both the password and the phone (or token).
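To give a feel for how that second factor typically works, here is a minimal sketch of my own (assuming the third-party pyotp library; this is an illustration, not a recommendation of any particular service): a shared secret is enrolled once, and a time-based one-time code is then generated from it by your phone or token and checked by the server.


import pyotp

secret = pyotp.random_base32()   # enrolled once, usually shown as a QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # what your authenticator app displays
print("current code:", code)
print("accepted:", totp.verify(code))  # True while the current time window lasts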

Use Tor network

The Tor Project is the most trusted and most recommended way to remain private and retain your anonymity. On their website, Tor is described as “free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.” Have a look at it to know more.

Take proper legal action

If something terrible happens to you online, please visit the local cyber crime department and lodge a formal complaint there. Local police stations do not deal with matters related to cyber crime, so you may want to go directly to the appropriate cyber security cell. If you have no idea where it is or what to do there, go to your local police station, take their advice and the information you need, and then go to the cyber security cell.

Learn GPG encryption

It is always suggested to learn GPG, GNU Privacy Guard, if you are up for that level of technical learning. It is a bit difficult, but surely a very useful tool for keeping your privacy secure.

The steps I mentioned above may sound like "too much" to maintain. But pretend that your device is your house and your password is the key to enter it. You normally do everything possible to keep your house keys safe, and the same rules apply here. These rules are nothing but a habit, like getting up in the morning: it seems difficult the first few times, but after that it becomes as organic and normal as can be. So build the habit of staying safe; merely using these tools will not, by itself, give you the results you need.

Hope you have a happy, safe life in the digital world.

by Anwesha Das at February 05, 2019 05:40 PM

November 27, 2018

Anwesha Das

Upgraded my blog to Ghost 2.6

I maintain my own blog. It is a self-hosted Ghost blog, with Casper, the Ghost default, as its theme. In the recent past, September 2018, Ghost updated to version 2.0. Now it was time to update mine.

It is always advisable to test a change before running it on the production server. I maintain a staging instance for exactly this; I test any and all changes there before touching the production server. I did the same thing here.

I exported the Ghost data into a JSON file. For ease of reading, I prettified the file (a small sketch of that step follows). I then removed the old database, started the container for the new Ghost, and reimported the data into the new Ghost using the JSON file.
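For anyone curious, the prettifying step is just a matter of re-indenting the JSON export so it is easier to read and diff; a minimal sketch (the file names here are hypothetical, not my actual export):


import json

with open("ghost-export.json") as src:          # the raw export from the old Ghost
    data = json.load(src)

with open("ghost-export-pretty.json", "w") as dst:
    json.dump(data, dst, indent=2, ensure_ascii=False, sort_keys=True)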

I had another problem to solve: the theme. I used to have Casper as my theme, but its new look is something I do not like for my blog, which is predominantly a text blog. I was unable to keep the same theme on the new Ghost, therefore I chose Attila as my theme. I made some modifications, uploaded it, and enabled it for my blog. Huge gratitude to the Ghost community and the developers; it was a really smooth job.

by Anwesha Das at November 27, 2018 02:57 PM

October 29, 2018

Anu Kumari Gupta (ann)

Enjoy octobers with Hacktoberfest

I know what you are going to do this October. Scratching your head already? No, don’t do that, because I will explain to you in detail all that you can do to make this October a remarkable one, by participating in Hacktoberfest.

Guessing what is the buzz of Hacktoberfest all around? 🤔

Hacktoberfest is like a festival celebrated by the open source community that runs throughout the month. It is a celebration of open source software, and it welcomes everyone, irrespective of their knowledge of open source, to participate and make their contribution.

  • Hacktoberfest is open to everyone in our global community!
  • Five quality pull requests must be submitted to public GitHub repositories.
  • You can sign up anytime between October 1 and October 31.

<<<<Oh NO! STOP! Hacktoberfest site defines it all. Enough! Get me to the point.>>>>

Already had enough of the rules and regulations and still wondering what it is all about, why to do it, and how to get started? Welcome to the right place. Hacktoberfest centres a lot around open source. What is that? Get your answer below.

What is open source?

If you are stuck on the name ‘open source’ itself, don’t worry: it is nothing other than what the phrase means. Open source refers to making the source code of a project, work, software, etc. available to everyone, so that others can see it, make changes that benefit the project, share it, and download it for use. The main aim is to maintain transparency and collaborative participation in the overall development and maintenance of the work, and it is valued for its redistributive nature. With open source, you can also organize events, schedule your plans, and host them on an open source platform. The changes that you make to others’ work are termed contributions. Contributions do not necessarily have to be core code; they can be anything you like: designing, organizing, documentation, projects of your liking, etc.

Why should I participate?

The reason you should is that you get to learn, grow, and eventually develop skills. When you make your work public, others analyze it and give you valuable feedback through comments and issues. The kind of work you do gets you recognized among others. By actively contributing, you also find mentors who can guide you through the project, which helps you in the long run.

And did I tell you, you get T-shirts for contributing? Hacktoberfest allows you to win a T-shirt by making at least 5 contributions. Maybe this is motivating enough to start, right? 😛 Time to enter into Open Source World.

How to enter into the open source world?

All you need is Git and an understanding of how to use it. If you are a beginner and don’t know how to start, or have difficulty starting off, refer to “Hello Git” before moving further. That article covers a basic understanding of Git and how to push your code through Git to make it available to everyone. Understanding is essential, so take your time going through it and absorbing the concepts. If you are good to go, you are now ready to contribute to others’ work.

Steps to contribute:

Step 1: You should have a GitHub account.

Refer to the post “Hello Git“, if you have not already. The idea there is the basic understanding of git workflow and creating your first repository (your own piece of work).

Step 2: Choose a project.

I know choosing a project is a bit confusing. It seems overwhelming at first, but trust me, once you get insight into how it works, you will feel proud of yourself. If you are a beginner, I would recommend first understanding the process by making small changes, like correcting mistakes in a README file or adding your name to the contributors list. As I already mentioned, not every contribution is code. Select whatever you like and feel you can change in a way that improves the current piece of work.

There are numerous beginner-friendly as well as cool projects that you will see labelled as hacktoberfest. Pick one of your choice. Once you are done selecting a project, get into it and follow the rest of the steps.

Step 3: Fork the project.

You will come across several similar posts which give you instructions on what you need to do to reach the objective, but what is most important is that you understand what you are doing and why. Here I am, to explain to you why exactly you need to perform these commands and what these terms mean.

Forking means creating a copy of someone else’s repository under your own GitHub account. By forking, you are making a copy of the project for yourself to make changes to. The reason we do this is that you might not want (or be allowed) to make changes directly to the main repository. The changes you make stay with you until you finalize them and let the owner of the project know about them.

You must be able to see the fork option somewhere at the top right.

screenshot-from-2018-10-29-22-10-36.png

Do you see the number beside it? That is the number of forks of this repository. Click on the fork option and you will see it forking, as shown here:

Screenshot from 2018-10-29 22-45-09

Notice the change in the URL. You will see that the repository is now under your account. Now you have your copy of the project.

Step 4: Clone the repository

What is cloning? It is downloading the repository so that it is available on your desktop for making changes. Now that you have the project at hand, you are ready to make the changes you feel are necessary, using the tools and applications on your desktop.

The green “Clone or download” button shows you a link, plus another option to download directly.

If you have git installed on your machine, you can perform commands to clone it as:

git clone "copied url"

Here, “copied url” is the URL shown for you to copy.

Step 5: Create a branch.

Branching is like having several directories on your computer: each branch holds a different version of the changes you make. It is essential because creating branches lets you keep track of the changes you make.

To perform operations on your machine, first change to the repository directory on your computer:

 cd  <project name>

Now create a branch using the git checkout command:

git checkout -b <branch-name>

Here <branch-name> is a name given by you. It can be any name of your choice, but keep it relatable.

Step 6: Make changes and commit

List all the files and subdirectories with the help of the ls command, find the file or directory in which you have to make changes, and make the necessary changes. For example, if you have to update the README file, you will need an editor to open the file and write to it. Once you are done updating, stage and commit your changes (git add the changed files, then git commit with a message describing the change), and you are ready for the next step.

Step 7: Push changes

Now you want these changes to be uploaded back to the place the copy came from, so the phrase used is that you “push changes”. You do this because, after the work, i.e. the improvements to the project, you want to let the owner or creator of the project know about them.

So, to push the changes, run the following:

git push origin <branch-name>

By default, the remote URL is referenced by the shortname origin. You can use any other shortname in place of origin, but you have to use the same one in the next step as well.

Step 8: Create a pull request

If you go to the repository on GitHub, you will see information about your pushed updates and, beside that, a “Compare & pull request” option. A pull request is the request made to the creator of the main project to look into your changes and merge them into the main project, if that is something the owner allows and wants. The owner of the project reviews the changes you made and applies the necessary patches as he/she feels right.

And you are done. Congratulations! 🎉

Not only this, you are always welcome to go through the issue list of a project and try to solve a problem: first comment to let everyone know your idea for solving the issue, and once the idea is approved, make your contribution as above. You can then make a pull request and reference the issue that you solved.

But, but, but… why don’t you create issues on your own working project and add the Hacktoberfest label for others to solve? You will be amazed by the participation. You are the admin of your project: people will create issues and pull requests, and you review them and merge them into your main project. Try it out!

I hope you found this useful and enjoyed doing it.

Happy Learning!

by anuGupta at October 29, 2018 08:20 PM

October 22, 2018

Sanyam Khurana

Event Report - DjangoCon US

If you've already read about my journey to PyCon AU, you're aware that I was working on a Chinese app. I got one more month to work on the Chinese app after PyCon AU, which meant improving my talk to cover more things, such as passing locale info to async tasks, switching languages in templates, supporting multiple languages in templates, etc.

I presented the second version of the talk at DjangoCon US. The very first people I got to see again, as soon as I entered the DjangoCon US venue, were Russell and Katie from Australia. I was pretty jet-lagged, as my international flight got delayed by 10 hours, but I tried my best to deliver the talk.

Here is the recording of the talk:

You can see the slides of my talk below or by clicking here:

After the conference, we also had a DSF meet and greet, where I met Frank, Rebecca, Jeff, and a few others. Everyone was so encouraging and we had a pretty good discussion around Django communities. I also met Carlton Gibson, who recently became a DSF Fellow and also gave a really good talk at DjangoCon on Your web framework needs you!.

Carol, Jeff, and Carlton encouraged me to start contributing to Django, so I was waiting eagerly for the sprints.

DjangoCon US with Mariatta Wijaya, Carol Willing, Carlton Gibson

Unfortunately, Carlton wasn't there during the sprints, but Andrew Pinkham was kind enough to help me with setting up the codebase. We were unable to run the test suite successfully and tried to debug that; later we agreed to use django-box for setting things up. I contributed a few PRs to Django and was also able to address reviews on my CPython patches. During the sprints, I also had a discussion with Rebecca, and we listed down some points on how we can lower the barrier for new contributions in Django and bring in more contributors.

I also published a report of my two days sprinting on Twitter:

DjangoCon US contributions report by Sanyam Khurana (CuriousLearner)

I also met Andrew Godwin & James Bennett. If you haven't yet seen the Django in-depth talk by James, I highly recommend you watch it. It gave me a lot of understanding of how things happen under the hood in Django.

It was a great experience altogether being an attendee, speaker, and volunteer at DjangoCon. It was really a very rewarding journey for me.

There are tons of things we can improve in PyCon India, taking inspiration from conferences like DjangoCon US which I hope to help implement in further editions of the conference.

Here is a group picture of everyone at DjangoCon US. Credits to Bartek for the amazing click.

DjangoCon US group picture

I want to thank all the volunteers, speakers and attendees for an awesome experience and making DjangoCon a lot of fun!

by Sanyam Khurana at October 22, 2018 06:57 AM

September 07, 2018

Farhaan Bukhsh

6 Bags and A Carton

This is not a technical post; this is something that I have been going through, in life right now. A few months ago, when I left my first job (another time, another post 😉 ), I had a plan. I wanted to take few months off and work on my technical knowledge and write amazing software and get a lot of learning out of my little sabbatical.

But I was not able to do that for a few reasons, primo being I had to move homes in Bangalore because my brother got transferred, so the savings that I had set aside wouldn’t be enough. This was not the end. When it rains, it pours, apparently. My dad got really sick; he had a growth near his kidney which the doctors diagnosed as cancer. I got really scared by the situation I was going through. The thing about your parents is that no matter how much you fight with them or how much they “control” you, at the end of the day the thought of losing them can scare the hell out of you. For me, they are my biggest support system, so I was not just scared, I was terrified.

I gave it a really deep thought and took a call. I needed to find a job. The sabbatical could wait. I started applying to companies and talking to people if they needed extra hand at work. One piece of advice – never leave a job unless you have another in hand. Luckily, I had my small pot of gold, savings, so even in this phase I was sustaining myself. Yes, savings are real and you should have a sufficient amount at any given point of your life. This helps you to take the hard decisions and also to think independently (what Jason calls F*ck you money).

It still feels like a nightmare to me. I used to feel that I would wake up and it would all be over. Reality check: it wasn’t a dream, so I have to live with it and make efforts to overcome this situation.

Taking up a job for me was important for two reasons,

  1. I have to sustain myself
  2. I need to have a back up in case my dad needs something (I also have super amazing siblings who were doing the same)

I realised one thing about prayer and God; yes, I believe in God, and I don’t know if prayer works, but you definitely get the strength to face your problems and the unknown. I used to call my dad regularly asking how he was doing; some days he could not speak all that much and would talk in his weak tone. I used to cry. I was in so much pain, although it was not physical or visible. And then, I would cry again.

But tough times teach you a lot: they show you your real friends, they show you the people you care for, and as Calvin’s dad would have said, “It builds character!”. I have been through bad times before, and the thing about time is, “It changes!”. I knew that someday this bad time I am going through would change. Either the agony I am going through would reduce, or I would get used to it.

So, as I was giving interviews, within a month of moving on from my old job I was offered one at Clootrack. I liked the people who interviewed me and I liked the ideas they have been working on. But I have seen people change and I have gone through bad experiences, and at no point did I want to repeat past mistakes, so I did a thorough background check before I said yes to them. I got a really good response, so here I am working with them.

The accommodation problem I had was that my brother was shifting out of his quarters, and I used to live with him. Well, I helped him pack, and I still remember bidding farewell to him and my sister-in-law. I had tears in my eyes, and after my goodbyes, the moment I stepped into the house I could feel the emptiness, and I cried the whole night. I could stay at the old place for a week, not more. At this point I can’t thank Abhinav enough; he was the support I needed. He graciously let me live with him as long as I wanted to. Apparently he needed help paying his bills :P. This bugger would never accept the fact that he helped me. When dad’s condition was getting bad, he gave me really solid moral support. I had also shared my situation with Jason, Abraar, Kushal and Sayan. I received a good amount of moral support from each one of them, especially Jason. I used to tell him everything and he would just calm me down and talk me through it.

So when I shifted to Abhinav’s place, all I had was 6 bags and a carton. My whole life was 6 bags and a carton. My office was a 2-hour bus ride one way and another 2 hours to come back. But I didn’t have any problems with this arrangement because it was the least of my problems. I literally used to live out of my bags, and I wasn’t sure this arrangement would last long. I had some really amazing moments with Abhinav; I enjoyed our ups and downs and those little fights and leg pulling.

Well, my dad is still not in the best of health, but he is doing better now. I visit my family more frequently now and, yes, call them regularly without a miss. I realised the value of health after seeing my dad. I went home a month after joining Clootrack, stayed with him for a whole month and worked remotely; we visited a few doctors and they said he is doing better. After coming back I realised I was not getting any time for myself, so I shifted to a NestAway near my office. Although I feel I’ve gotten used to the agony, you never know what life has in store for you next.
It feels much better now, though.

I thank God for giving me strength and my friends and family for supporting me in a lot of different ways.

With Courage in my Heart,
And Faith over Head

 

 

by fardroid23 at September 07, 2018 04:12 AM

September 03, 2018

Sanyam Khurana

Event Report - DjangoCon AU & PyCon AU

I was working on a Chinese app for almost 4 months, developing a backend that supports multiple languages. I spent time almost daily reading documentation in Chinese and running it through the Google Translate app to integrate third-party APIs. It was painful, yet rewarding in terms of the knowledge I gained from the Django documentation and several other resources about how to support multiple languages in Django-based backends and the best practices around it.

While providing multilingual support through the Django backend, I realized that every now and then I was hitting a block and then had to read through the documentation and research the web. There were certain caveats that I worked around while researching whenever I was stuck, and I noted them down as "gotcha moments" to cover later in a talk.

I got an opportunity to be in Sydney, Australia for DjangoCon AU and PyCon AU. This was very special, because it was my first international trip, and the first time I was attending a Python conference outside India.

I was excited and anxious at the same time. Excited to meet new people, excited to be at a new place, excited to see how another PyCon takes place, and hoping to bring some good ideas about organizing a conference back to India for PyCon India :) I was anxious as it was a solo trip; I was alone, with that impostor syndrome kicking in. "Will I be able to speak?" -- But then I decided that I would share whatever I've learned.

Even before the conference began, I got an opportunity to spend some time with Markus Holtermann (Django core-dev). We roamed around Sydney Opera House and met Ian, Dom, Lilly and later went to Dinner.

PyCon AU with Nick Coghlan, Dom, Lilly, Markus Holtermann, Ian Foote, Andrew Godwin

I'm bad at remembering names! And when I say this, I mean super-bad. But to my astonishment, I was able to remember names for almost everyone whom I had an opportunity to interact with.

I registered as a volunteer for PyCon AU, which in turn gave me a lot of perspective on how PyCon AU manages different aspects of logistics, food, video recording, speaker management, volunteer management, etc. There were certain moments when I thought "Oh, we could've done it like this in PyCon India! We never thought about this!", and Jack Skinner was really helpful in discussing how they organize different things at PyCon AU.

My talk was on August 24, 2018, and it went pretty well.

You can see the slides of my talk below or by clicking here

Here is the video:

During the sprints, I met my CPython mentor Nick! Nick was the one who helped me get started with CPython during the PyCon Pune sprints.

I never had an opportunity to try my hands on hardware in my life and seeing so many hardware sprinters, I was curious to start playing with some of the hardware.

During the two days of sprints, I was able to fix my CPython patches, land a few PRs to Hypothesis, which is a testing tool, and play with a Tomu to use it as a 2FA device.

Throughout the sprints, I met many people and yet got so much work done, which left me astonished. (I really wish I could be that productive daily :) )

Overall, it was a really pleasant experience and I prepared a list of notes of my PyCon AU takeaways which I shared with PyCon India team.

We had a grand 10th-anniversary celebration, and for the first time ever we had a jobs board at PyCon India, along with various other things :)

I want to thank all the organizers, volunteers and attendees of PyCon AU for all the efforts to make the conference so welcoming and inclusive for everyone.

-- Your friend from India :)

by Sanyam Khurana at September 03, 2018 06:57 AM