2013, a retrospective

It has been an interesting year in ways that I did not anticipate. Looking back, I’d like to recount a few things so that I don’t forget the experiences that have dramatically altered my worldview, hopefully for the better. I’d like to remember these fleeting moments, as they’re too precious to be lost. Here they are, in no particular order.

  • It feels good to walk for the first time without support after an extended period on a hospital bed. The first unsure steps, like a child, are both exhilarating and scary. The slow steps, the deep breaths, and victory. The blessings of human mobility.
  • The seconds before general anesthesia. Unsure about what’s going to happen. Succumbing to the uncertainty. General sense of well being, even though NOT. Numbness traveling up the leg, starting at the fingertips. Fluttering of eyelids, coldness, and out.
  • Waking up thinking “Made It”, on more than one occasion. Colorful and vivid morphine-induced dreams.
  • Drinking water. Never did it taste so good. Thinking “why didn’t I enjoy this more?”
  • Feeling satisfied and carefree when the last drain tube is out. Going for another walk without the chains and shackles this time, beaming and happy.
  • Taking bad news with a “crap, in a bit of a pickle”. Wishing there weren’t so many people around me. Thankful there weren’t some people around me.
  • Taking good news with a “hmm, that’s great”. Thinking “what’s next”, and where to go for lunch.
  • Waiting expectantly for the visiting hours and seeing Wathsala walk in at the strike of the clock. All is well.
  • Sleeping to the sound of a waterfall. My neighbor’s snoring and sleep-talk required me to explore this option. It worked out well.
  • Sleeping in my own bed and thinking how low-tech it is. The light streaming through an open window and a gentle breeze. It’s 11am on a Tuesday and I’m in bed and not at work.
  • Being breathless after a trip across the room.
  • Doing breathing exercises using a contraption that made me want to keep bettering myself to impress the nurses. Wathsala knew what was going on and was in silent support of it. Or so I presume.
  • The real beauty of loving and caring human beings. Honestly, there’s no bigger service than nursing someone to health.
  • Observing the activities of the Vietnamese drug lord and his two mistresses in an adjoining bed. His hefian mannerisms and attire intrigued the hell out of me. Didn’t see him after he was wheeled out for surgery. I figured he requested a different bed. Wonder why.
  • Reading “Ape Gama” by Martin Wickramasinghe after many many years and thinking, “that is just beautiful”.
  • Visits from old friends.
  • Wearing the sarong like a boss. Proudly brandishing the national attire on the many trips abroad and vowing to stick with it for good. More “why didn’t I do this before?”
  • Walking into the hospital like I owned the place. Being recognized. Probably as the guy who visits Mount Elizabeth wearing a sarong. Proud to be that guy.
  • Visits to the temple. More, “why didn’t I do this before?”
  • Hearing about those who were praying for my recovery from other people. Some of them I had not even met, until just today.
  • Feeling grateful for my A team of Poh-Koh-Tan for pulling me out of a mess.
  • Dr. Liang banging his head on the table when he found out I was flying out the next day. He wanted more time to work with the “interesting case”. I granted him his wish.
  • Hearing old voices on the phone unexpectedly.
  • Hearing the sound of the crows outside the General Hospital in the morning. Inspiration shows up in unexpected places.
  • Being sick of soup. To this day.
  • Stories of talking dogs and cats and elaborate back-stories for doing what they did.
  • Shaking my cousin’s hand in the recovery room as I drifted in and out of sleep.
  • Waiting for the first rays of sunlight after a sleepless night.
  • Experiencing pain, and knowing it will pass. And it did.
  • Phone calls from my friends, following my every step of the way and helping me on.

It’s been an interesting year and I hope 2014 will be an interesting one too, and if all goes according to plan, it will. Stay tuned.


Software Freedom Day 2013 @ Virtusa

This last week, the fine folk at the Virtusa Open Source SIG organized an event to celebrate Software Freedom Day, where my good friends Mifan and Suchetha delivered keynotes. Also in attendance was Arunan, so it was a reunion of sorts with some old friends. It’s been a while since I participated in anything open source / free software, and it was great to see that the old flame is still alive at Virtusa. I hope it helps shape their worldviews and brings them as much purpose as it did me more than ten years ago.

The last SFD I attended was in 2008. I blogged about it here with some photos available in my surprisingly-still-around flickr account. It was in Chinatown in Boston and I drove up from Pennsylvania, mostly to get my mind off things. It was there that I purchased a copy of “Free Software, Free Society”, a collection of essays by Richard Stallman. Five years later, I picked up the dusty book from my shelf and re-read the GNU manifesto, to get my mind back to the core principles.


Source: http://www.flickr.com/photos/aweeraman/sets/72157600555500824/

It was there, as I was flipping through the pages, that I saw RMS in a whole new light: his uncompromising tenacity in the face of control and oppression, and his unfaltering stance on the ethics and morality of freedom. He is a true freedom fighter. His message sometimes gets lost in the pandemonium we go through daily, but the spirit of the freedom he preached is very much alive every time we believe that knowledge should be free and that everybody should have access to it. I hope this message continues to inspire folks for years to come.

Google App Engine + APNS

Earlier this month, Google App Engine released support for outbound sockets, and I figured that a Saturday spent mucking around with App Engine to see if I could get it to work with APNS would be time well spent. In the sandboxed world of GAE, the lack of outbound socket support meant that it was not possible to communicate with external services by opening a socket, which is what the Apple Push Notification Service (APNS) requires. So for a long time it was not possible to use App Engine to build an APNS provider, but now you can. Services like Urban Airship expose this capability in a way that can be consumed through a RESTful service, which works with GAE using URLFetch, but the focus of this post is communicating with APNS directly. There are some caveats, though: billing needs to be enabled, although the free tier should be sufficient for playing around, and there’s also the matter of the daily quota.

Here’s a whirlwind tour of getting yourself up and running on APNS with Google AppEngine.

1 – Fun with certificates and keys

Apple makes the job of working with APNS quite a fun and intellectually stimulating experience, if you have nothing else to do on a Saturday. You may also notice a couple of new gray hairs once you’re done, but at the same time, there is an elegance to the architecture that must be acknowledged, even though it’s painful to set up.

Generate a new certificate signing request
Fire up the Mac Keychain Access tool and request a certificate from a certificate authority.
Request a certificate from a CA
In the resulting dialog, enter your email address and an identifiable string in the common name field. Also, select the “Saved to disk” option, since we need to upload it later to the provisioning portal.
Certificate assistant
Once you’re done with this, you should have a Certificate Signing Request (CSR) in your file system.

Create a new App Id
Now head over to the Apple developer site, log in with your developer credentials and navigate to the iOS Dev Center, where you should see a link to “Certificates, Identifiers and Profiles” as shown below.
iOS Developer Program
First, create a new App Id, by navigating to that section:
New App Id
In the add screen, enter any description and select the “Push Notifications” check box:
Push notifications
Also, in the bundle ID section, remember to include an explicit fully qualified bundle Id in the reverse domain notation, as wild-cards are not supported for push notifications:
Bundle Id

Create a new push certificate
Now, navigate to the certificates section, and create a new one. During creation, select the combo box as indicated below:
Development certificate
Next, select the App ID created earlier and, when prompted, upload the Certificate Signing Request created earlier. If all goes well, the certificate will be generated. Download this certificate and double-click it to open it in Keychain Access. When you expand the certificate, you should see the private key with the common name that you entered earlier. Note that the certificate name is prefixed with “Apple Development iOS Push Services”. Select both the certificate and the key, right-click and choose “Export 2 items”. It will prompt you for the Keychain password and generate a .p12 file that you will need later to configure the server-side provider.
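If you want to sanity-check the exported bundle from the command line, OpenSSL can list its contents. The file name and password below are placeholders for your own:

```shell
# List the certificate inside the exported bundle; "push-dev.p12" and
# "secret" are placeholders for your own file name and export password.
openssl pkcs12 -info -in push-dev.p12 -nokeys -passin pass:secret
```

If the export went well, you should see the certificate printed in PEM form along with its friendly name.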

Generate a provisioning profile
The last step in this process is to generate a provisioning profile so that you can deploy the app onto the device. In the devices section of the portal, create a new device and enter the 40-character device ID you get from iTunes or the Xcode Organizer. Then head over to the Provisioning Profiles section and create a new profile. Remember to select “iOS App Development” as shown below:
Provisioning profile
In the next screens, select the App Id, device and certificate created in the previous steps to create the provisioning profile. Download the profile and drag it onto the profiles section of the Xcode organizer.

Now the painful part is done. Time to do some real work.

2 – Create the web service

A pre-requisite for this tutorial is Google App Engine, and getting a service up and running on it. If you haven’t done that before, follow the steps outlined in the getting started page and it should give you a good idea of how to work on this platform. It comes with good Eclipse integration, so it should be a snap to get set up.

The framework I’ve used for APNS is java-apns, which provides a simple API. Here’s all of the code I used to build out the simple service; this could live in a simple servlet or in a RESTful service on a JAX-RS implementation like Jersey:

// context is the ServletContext; the .p12 file name is a placeholder for your own
InputStream inputStream = context.getResourceAsStream("/WEB-INF/push-dev.p12");

ApnsService service = APNS.newService()
        .withCert(inputStream, "password")
        .withSandboxDestination()
        .withNoErrorDetection()
        .build();

String payload = APNS.newPayload().alertBody(message).badge(1).build();

ApnsNotification notification = service.push(token, payload);

A couple of things to note: the .p12 file exported from the Keychain needs to be included in the war file (preferably under WEB-INF to prevent public access) and password-protected at export time. Also, it’s important to add the withNoErrorDetection() method as shown above, as the service would otherwise try to spawn threads to detect errors, and thread creation is restricted in the GAE environment. The input to this web service is the 40-character token received from the device, and the message that is to be sent.

At this point, the server side work is done. Let’s move over to the client.

3 – Create the iOS client

For the purpose of demonstration and testing, I’ve created a simple single view application with the bundle ID specified in the provisioning profile.

The key methods you would need to implement in the AppDelegate would be:


1) -application:didFinishLaunchingWithOptions:
This method gets invoked when the application finishes launching either directly or when launched through a push notification. In the case of the latter, the details of the push notification are passed in through a dictionary object so that it can be dealt with. Here’s the code to register for push notification alerts:

[[UIApplication sharedApplication] registerForRemoteNotificationTypes:(UIRemoteNotificationTypeAlert | UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound)];

2) -application:didRegisterForRemoteNotificationsWithDeviceToken
This method gets invoked with the device token received from APNS. This token uniquely identifies the device and is not the same as the UDID. The token needs to be sent to the web service so that it can pass it on to APNS and have messages sent back to this device. The token contains angle brackets and spaces, which need to be removed as shown below:

NSString *token = [deviceToken description];
token = [token stringByTrimmingCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"<>"]];
token = [token stringByReplacingOccurrencesOfString:@" " withString:@""];

3) -application:didFailToRegisterForRemoteNotificationsWithError
This method gets invoked if there’s some error in registering for remote notifications, leaving the push token unavailable to the app.

4) -application:didReceiveRemoteNotification
This method can be used to trap an incoming message while in the app, and take some action. In this case it just shows it in an alert view.

UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Push Alert" message:userInfo[@"aps"][@"alert"] delegate:self cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alertView show];

To test this capability, I’ve built a test app that takes input text from a text field and sends it to the web service created in GAE. The resulting push notification is trapped and displayed in an alert view as shown in the sample code above.


Finally, a couple of things to keep in mind when developing apps that use push notifications:

  • It’s inherently unreliable; do not use it to transfer any critical information
  • While the transport is secured through TLS, it’s still advisable not to use APNS for company-confidential information
  • Do not store your certificates in an accessible location on the web server, and password-protect them for additional security
  • Store the device tokens safely on the server side; users will be very upset if they are compromised
  • It’s good practice not to update information in the push notification handler code, since it may trigger updates without the user’s knowledge

That’s all for now. Enjoy!

Git guts

Today I will dive into the guts of git to showcase the simplicity and elegance with which git manages content internally in its own content-addressable file system. Armed with this knowledge, you will get a deeper understanding of the underlying data structure, which will help you figure out and troubleshoot issues that may inevitably come up as you use git.

To start, I shall create a new directory and initialize git.

$ mkdir git-guts
$ cd git-guts
$ ls -a
. ..
$ git init
Initialized empty Git repository in /Users/anuradha/dev/workbench/git-guts/.git/
$ ls -a
. .. .git

At this point, there are no files under version control yet. Here are the files that have been created during initialization:


Of these, the hooks are boilerplate and none are yet active. To make them active, they need to be renamed to remove the .sample suffix.
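For example, activating the pre-commit sample (any of the others works the same way) is just a rename:

```shell
# A fresh repository ships with inactive sample hooks
ls .git/hooks
# Dropping the .sample suffix activates the hook; it must remain executable
mv .git/hooks/pre-commit.sample .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```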

In this post, I shall focus on the .git/objects directory, as that is where all the content is stored as hashed “objects”. To show what happens, let’s add a file to source control and observe the changes:

$ echo "bar" > foo
$ git add foo
$ git commit -m "initial commit"
[master (root-commit) 64f3e97] initial commit
1 files changed, 1 insertions(+), 0 deletions(-)
create mode 100644 foo
$ find .git/objects/ -type f
.git/objects/57/16ca5987cbf97d6bb54920bea6adde242d87e6
.git/objects/64/f3e9762509b0ce9cbb252f69847957e5368632
.git/objects/6a/09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae

Adding a single file to the repository caused the creation of three objects. Each object is uniquely identified by a 40-character SHA-1 hash of its content, which brings us to one of the key aspects of git: it’s nearly impossible to alter the contents of any single file without changing its cryptographic hash. Unlike version control systems that pre-date this approach of cryptographically ascertaining the integrity of the content, it’s quite hard to tamper with a file or maliciously change history. This, coupled with the ability to sign tags using a private key, adds an additional level of authenticity and non-repudiation to the release process.
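As a quick sketch of the latter, an annotated tag can be created and inspected as shown below; git tag -s works the same way but additionally signs the tag with your GPG key, and git tag -v verifies the signature:

```shell
# Create an annotated tag object pointing at the current commit
# (use "git tag -s" instead to sign it with your GPG key)
git tag -a v1.0 -m "first release"
# Inspect the tag object: tagger, message, and the commit it points to
git cat-file -p v1.0
```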

Let’s analyze the three types of objects. To see the type of object, the git cat-file -t HASH command can be used. It shows that the three types of objects are:

  • blob
  • commit
  • tree
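You can verify this yourself without copying hashes around, since the three objects can also be named symbolically relative to HEAD (a sketch, assuming the repository built above):

```shell
git cat-file -t HEAD            # prints "commit"
git cat-file -t 'HEAD^{tree}'   # prints "tree"
git cat-file -t HEAD:foo        # prints "blob"
```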

To see the contents of each file, the git cat-file -p HASH command can be used as shown below:

$ git cat-file -p 5716ca5987cbf97d6bb54920bea6adde242d87e6
bar

This is the first of the three objects, the “blob”: the actual contents of the file. Note that the file is addressable by its hash, making this structure a content-addressable file system. But you may wonder: how does git know what the file name is, when the object is named only by its hash? I will get to that shortly.
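As an aside, the hash is not of the raw file contents alone: git prepends a short header consisting of the object type, the size in bytes, and a NUL byte. For our 4-byte blob, the object name can be reproduced with nothing but sha1sum:

```shell
# git hashes "blob <size>\0<content>"; for the 4-byte content "bar\n"
# this reproduces the object name git computed for us above
printf 'blob 4\0bar\n' | sha1sum
# prints 5716ca5987cbf97d6bb54920bea6adde242d87e6
```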

Let’s look at the next object.

$ git cat-file -p 64f3e9762509b0ce9cbb252f69847957e5368632
tree 6a09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae
author Anuradha Weeraman 1358159197 +0530
committer Anuradha Weeraman 1358159197 +0530

initial commit

This is the “commit” object, which is also stored as an object in the file system. Note that there are two fields, author and committer, since the two can be different individuals on a large distributed development project; this way original contributions are acknowledged and not lost when changes are merged in. The commit also holds a hash reference to its “tree”. Let’s look at the tree object next.

$ git cat-file -p 6a09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae
100644 blob 5716ca5987cbf97d6bb54920bea6adde242d87e6 foo

This is the last of the three objects, the “tree” object. It contains a descriptor of all the files that are part of the commit; git builds it by taking the information from the staging area / index and creating an object at the time of the commit. It shows the permissions of the file in a somewhat different format from the standard UNIX file permissions; the last three digits tell you what the permissions of the file were at the time it was committed. The line also indicates the hash of the blob, followed by the name of the file. This is how git knows what the blob should be called in the file system when the code is checked out.

Let’s also take a look at what the HEAD of the tree is pointing to:

$ cat .git/HEAD
ref: refs/heads/master
$ cat .git/refs/heads/master
64f3e9762509b0ce9cbb252f69847957e5368632

It now has a reference to the last “commit” object. So when you clone or pull down master, git knows the last commit that was introduced into the repository.

All I’ve described so far was a single commit. How does git keep track of the history and the commit graph based on this structure, you might wonder. Let’s make a change to the foo file and commit it.

$ echo foo > foo
$ git add foo
$ git commit -m "Second commit"
[master 2c8200f] Second commit
1 files changed, 1 insertions(+), 1 deletions(-)
$ find .git/objects -type f
.git/objects/20/5f6b799e7d5c2524468ca006a0131aa57ecce7
.git/objects/25/7cc5642cb1a054f08cc83f2d943e56fd3ebe99
.git/objects/2c/8200f75860bede9aaa0c156c133d15fa418bd5
.git/objects/57/16ca5987cbf97d6bb54920bea6adde242d87e6
.git/objects/64/f3e9762509b0ce9cbb252f69847957e5368632
.git/objects/6a/09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae


There are three new objects in the system now, a new blob, a tree, and a commit. The blob and tree objects are similar to the ones discussed earlier, but there’s a change to the commit object:

$ git cat-file -p 2c8200f75860bede9aaa0c156c133d15fa418bd5
tree 205f6b799e7d5c2524468ca006a0131aa57ecce7
parent 64f3e9762509b0ce9cbb252f69847957e5368632
author Anuradha Weeraman 1358161997 +0530
committer Anuradha Weeraman 1358161997 +0530

Second commit

It references the parent commit. This way the entire commit graph can be traversed and mapped using these commit objects. The .git/refs/heads/master file is updated to refer to the latest commit. git reflog is a very useful tool that shows the updates to the HEADs over time and can be used to diagnose issues you might otherwise consider unrecoverable. Git is very protective of data, so it’s actually quite hard to lose it unless you manually trash the object repository. On most occasions, a “lost” commit turns out to be a dangling, unreferenced commit that you can track down using git reflog and recover. Here’s a post that explains this process for those who are interested.
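A minimal sketch of that recovery, assuming the lost commit still shows up in the reflog:

```shell
# List where HEAD has pointed over time; each entry names a commit
git reflog
# Suppose the lost commit appears as HEAD@{1}; pointing a branch at it
# makes it reachable again (the branch name is just an example)
git branch rescued 'HEAD@{1}'
```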

Now, to make things a little more interesting and to create some awareness of what the git utilities are doing behind the scenes to make our lives easy, let’s create these objects manually using a few low-level commands. For the purpose of this exercise, I will create a brand new repository and initialize git.

Let’s create the blob object for the file “foo” with the content “bar” as in the original example:

$ echo bar | git hash-object -w --stdin
5716ca5987cbf97d6bb54920bea6adde242d87e6

The -w switch tells git to write the object to the repository, and --stdin instructs it to read the contents from standard input. It then outputs the hash of the object that it just created.

Let’s look at the repository to see if it really was created:

$ find .git/objects -type f
.git/objects/57/16ca5987cbf97d6bb54920bea6adde242d87e6

So far git has been telling us the truth.

Now, let’s create a tree object. Since git relies on the index, or the staging area, to determine the contents of the tree, we will use the git update-index command to set things up in the staging area. Note that the current directory is still empty; there is no “foo” file in it. The content is only available as a hashed object inside .git, and .git still doesn’t know it’s called “foo”. To update the staging area in preparation for writing the tree object:

$ git update-index --add --cacheinfo 100644 5716ca5987cbf97d6bb54920bea6adde242d87e6 foo

This is equivalent to performing git add foo. Now git knows the file name of the object, but the tree object is not yet written to the object repository. To do that:

$ git write-tree
6a09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae

This writes the tree object, and returns its hash. Let’s look at the file system again:

$ find .git/objects -type f
.git/objects/57/16ca5987cbf97d6bb54920bea6adde242d87e6
.git/objects/6a/09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae
$ git cat-file -p 6a09c59ce8eb1b5b4f89450103e67ff9b3a3b1ae
100644 blob 5716ca5987cbf97d6bb54920bea6adde242d87e6 foo

Still, the working directory does not contain a “foo” file. Right now these objects are dangling, as there’s no commit object referencing them, and it’s not possible to check out a copy of the foo file yet. Let’s create the commit object now:

$ echo "initial commit" | git commit-tree 6a09c5
c3352776341945bcdddd400d3765635bb2be5671

The short hash of the tree object and, optionally, any preceding commits are passed as arguments to the git commit-tree command, which returns the hash of the commit object. At this point the repository still has no idea what the last commit was, so running git log would result in an error:

$ git log
fatal: bad default revision 'HEAD'

To fix this:

$ echo c3352776341945bcdddd400d3765635bb2be5671 > .git/refs/heads/master

Let’s look at the log again:

$ git log
commit c3352776341945bcdddd400d3765635bb2be5671
Author: Anuradha Weeraman
Date: Mon Jan 14 18:06:51 2013 +0530

initial commit

There you have it. Git now recognizes your last commit.

If you now list the directory where you initialized the git repository, you would not notice any files, since all these objects were created directly in the git object repository. Now that we have created the commit object and the log shows the last commit, we’re able to load the file into the directory to create a working copy. The way we do that is by resetting the contents of the repository to the HEAD which points at the latest commit.

To illustrate this more clearly:

$ ls -a
. .. .git (empty directory)
$ git reset --hard
HEAD is now at c335277 initial commit
$ ls -a
. .. .git foo
$ cat foo
bar

And voilà.

Hope this helps, and you now have a better understanding of the git guts.

A boy’s first computer

The week so far has been an eventful one. Being bed-ridden has made me pensive and nostalgic about my childhood, and long for the simpler days. I was specifically dwelling on the subject of interpreters and compilers, which took me back to when I was nine, when I asked my uncle, whom I considered the pre-eminent guru in all things computers at the time, how to compile a .bat file.

At that time I was given an old 286 to play with, and when we moved around, so did my computer. Every week I would flip open the large case using a convenient latch on the two sides and peek in. I was enamored of the machine and eventually learnt what some of its parts were, and wondered how they worked. I sought books and the help of my uncle to learn about it. I once spent a weekend at my uncle’s, where he showed me the difference between dir /p and dir /w, and told me to try it out for about twenty minutes while he went to speak to someone. He taught me the basics of DOS, which I was usually very eager to try out on my own machine.

After I was done peeking, I would usually close the box up and meticulously clean it. It was kept in perfect condition next to my work desk, on a blue custom-built computer table with a pull-out keyboard tray, a place for a printer, and some shelves below for various things. It was pretty big by today’s standards, but then everything was. The computer case was about 2.5′ x 2.5′ x 10″. It was big, and I couldn’t carry it by myself, but I made sure I packed it safely for my vacation trips.

It also featured a 4 MB hard drive, 1 MB of RAM, a 14″ monochrome monitor and a 5.25″ floppy disk drive. There was a lot of trial and error in figuring it out, and I spent many late nights trying to understand DOS, WordStar, WordPerfect, dBase III+, BASIC and Lotus 1-2-3. The command line baffled me and piqued my interest, and I learnt to love the blinking cursor on the green screen, waiting for the next command to be input.

It was a used machine at the time, so it came with some customizations and a funky DOS-shell-like interface that was navigable through function keys but also let you escape into the shell. I spent a lot of time trying to figure out how it worked and how to modify it. It also came with a couple of games which I still fondly remember: Digger, Pac-Man and Paratrooper. I played very few games after that. I recently tracked this down to skill-envy (a pseudo-psychology construct that I just coined): playing games made me wonder too much about how they were constructed, and not having the skills to build a similar game myself made me envious of the game’s author and the knowledge of the black art he possessed. Hence I preferred to stay away from games. I know, it’s childish, but in my defense I was a child.

Part of this black art was machine language. Printing the contents of .exe files showed a series of unintelligible characters, and yet the only executable programs I could create at the time were plain, readable .bat files. That was when I asked my uncle how I could convert a .bat file to a .exe file, which I viewed as inherently superior due to its mysterious nature. Knowing what I was trying to get at, he suggested I learn QuickBASIC. I only had GW-BASIC installed on the computer, but I came to realize that the syntax of QuickBASIC was more or less the same, minus the explicit line numbers, so I taught myself GW-BASIC on my 286. Later I got a copy of QuickBASIC and, lo and behold, there was an option to compile programs into the mysterious .exe format that I could execute directly from the command line. This revelation was a turning point for me, and I was hooked on QB.

Having outgrown the 286, I pestered my father to purchase a newer computer, this time a 66 MHz 486 DX2 with the “turbo” button. If turbo was turned off, the computer ran slower, which baffled me. The computer also featured a 40 MB hard drive. That should last forever, I thought at the time.