Building Android from source

Building Android from source seems like a daunting task at first, but it’s really not that hard. I’ll walk through the steps as simply as possible to get your first AOSP build running on your handset. This is a rite of passage for many, and guaranteed to give you insights into the inner workings of Android. First, let’s do a few push-ups and jumping jacks to mentally and physically prepare for the journey, and at the end of it, you’ll wonder why you even had to do that.

What you need:

  • A 64-bit environment
  • ~150 GB of disk space
  • A working Linux installation (I use Debian, but take your pick). I suggest going with bare metal over virtual machines in the interest of sanity and general well being
  • A decent broadband connection (to download a significant portion of the internet)
  • Some time and patience
  • A healthy belief in the supernatural, and their inevitable involvement in the building of large complex codebases

Everything you need to know about the process is right here. Feel free to skim through those docs first to get an overview of the entire process. What’s below is mostly a summary, plus a few things not explicitly mentioned there that tend to stump newbies.

Step 1

Prep your OS and install any dependencies. At a minimum you’re going to need Java, Python, C/C++, make, and git, so make sure they’re installed using your favorite package manager. If you’re on Debian/Ubuntu, you’d run something like the following:

sudo apt-get install openjdk-8-jdk git-core gnupg flex bison gperf build-essential zip curl zlib1g-dev gcc-multilib g++-multilib libc6-dev-i386 lib32ncurses5-dev x11proto-core-dev libx11-dev lib32z-dev ccache libgl1-mesa-dev libxml2-utils xsltproc unzip

If you find that you need something else down the line that’s not mentioned above, just apt-get it. Also, package names may vary depending on your choice of distribution.

Run each of the tools manually (java, python, git, and so on) as a quick sanity test that everything is installed properly.
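A quick way to do that in one go; this loop only probes a representative subset of the toolchain, so adjust the list to taste:

```shell
#!/bin/sh
# Probe a few of the required tools; any MISSING line means the
# corresponding package still needs to be installed.
for tool in git gcc g++ make python3 curl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: OK"
  else
    echo "$tool: MISSING"
  fi
done
```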

Step 2

Linux may require some udev configuration to allow non-root users to work with USB devices, which you’re going to need later on, so execute this command:

wget -S -O - | sed "s/<username>/$USER/" | sudo tee >/dev/null /etc/udev/rules.d/51-android.rules; sudo udevadm control --reload-rules
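For reference, the rules file you end up with contains one entry per hardware vendor, along these lines (18d1 is Google’s USB vendor ID; the OWNER field is what the sed substitution above fills in with your username):

```
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0600", OWNER="<username>"
```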

Step 3 (Optional)

Set up ccache to extract some performance out of the build by putting the following into your .bashrc:

export USE_CCACHE=1
export CCACHE_DIR=/path/to/empty/ccache/directory/where/cache/is/kept

Then set the cache size to 50G by running the following from the root of the source tree:

prebuilts/misc/linux-x86/ccache/ccache -M 50G

Step 4

The Android Open Source Project is a behemoth of a codebase, with so many third-party open source components and frameworks included that it needed its own layer on top of git just to manage the dependencies. That’s what repo is. On the plus side, it’s quite easy to get working.

Download the repo tool to your ~/bin directory as follows:

$ curl > ~/bin/repo
$ chmod a+x ~/bin/repo

Make sure that ~/bin is in your $PATH. Type repo a few times and watch it barf in your face while saying:

error: repo is not installed.  Use "repo init" to install it here.

Do not fret, this is normal.

You see, you have to initialize repo with a manifest, which tells it where to download the source from. You do this by navigating to an empty directory and executing:

repo init -u

If you do not specify a branch name with the -b parameter, it will fetch the master branch, which is what we’re going to do here.

You should now see an empty directory, apart from a .repo directory containing the metadata that repo downloaded from the manifest. Now, I’m assuming that you’re probably at home and do not have complicated proxy arrangements to get to the internet, so I’m going to avoid talking about the HTTP_PROXY environment variable that you’d have to set if you did, but I guess it’s all redundant now that I’ve already said it, so I’m just going to move on:

repo sync

This will download the internet. This will run for an inordinate amount of time. A few things you can do while this executes:

  • knit
  • write a book
  • have kids
  • watch the poles melt

A rough approximation, and the internet is very divided on this topic, is that it will require anywhere from 12 to 15 GB of downloads, which will automatically expand to around 34 GB on disk once it’s done. In my experience, I’ve only stayed awake till about the 13 GB mark, so I didn’t quite get to see the transformation to its final glory. Add 50 GB for the ccache, leave some room to spare, and you can see why you need a lot of disk space to go through this.
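Putting those numbers together (about 34 GB of source, 50 GB of ccache, plus the build output), a pre-flight disk check is cheap insurance. The 150 GB threshold below is just this guide’s estimate:

```shell
#!/bin/sh
# Warn if the current filesystem has less than ~150 GB free.
need_kb=$((150 * 1024 * 1024))
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt "$need_kb" ]; then
  echo "Warning: only $((avail_kb / 1024 / 1024)) GB free, the build may not fit"
else
  echo "Disk space looks OK ($((avail_kb / 1024 / 1024)) GB free)"
fi
```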

Now you wait.

Step 5

In my case, I needed to run the build on my Nexus 5. The AOSP source tree doesn’t have everything you need to build images for the Nexus 5 specifically, so I had to go to this link, scroll down to Nexus 5, and download the Broadcom, LG, and Qualcomm binary blobs for its hardware. Put them in the root of the Android source tree and execute them; each is a self-extracting script. This step is required if you want to run the image on the device later on.

Step 6

Now comes the compilation step. This is actually the easiest part.

Initialize the build environment by sourcing a script; you can use the bash source command or the good old dot command, whatever strikes your fancy:

. build/

This will inject a bunch of build-related environment variables into your current shell. Note that you have to run this in every shell that you want to run a build from.

Next, run the lunch command:

lunch

This will give a list of targets and allow you to select one. In my case, I selected “aosp_hammerhead-userdebug”; hammerhead is the code name for the Nexus 5.

One more step, and that’s to start the build.

time make -j4

You could easily just say “make”, but I like to know how long the build took once it eventually finishes, and the -j4 flag indicates the concurrency level for make (rule of thumb: 2 x number of cores). Now you can go for lunch.
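If you’d rather not hardcode the 4, the rule of thumb can be computed from the machine itself; nproc is part of GNU coreutils, so it’s already on a Debian/Ubuntu box:

```shell
#!/bin/sh
# rule of thumb: concurrency = 2 x number of CPU cores
jobs=$(( $(nproc) * 2 ))
echo "building with make -j${jobs}"
# then: time make -j"${jobs}"
```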

Things to do while this runs:

  • Read Game of Thrones (all the books)
  • Have a fabulous mid-life crisis
  • Watch Lawrence of Arabia

To be fair, it’s not that bad, just a few hours depending on your setup.

Step 7

Once the build is done, you will have a bunch of files under out/target/product/{device} which you can now start flashing.

Connect your Android phone to the computer, and assuming that all the drivers and the Android SDK are set up (a dependency I somehow failed to mention before), you should be able to run the following command:

adb reboot bootloader

This can also be achieved by shutting the device down and starting it while holding a combination of buttons (such as volume down + power on the Nexus 5). This will put you into fastboot mode. On the fastboot screen, pay attention to whether the bootloader is locked; if it is, execute the following to unlock it:

fastboot oem unlock

This will wipe the data on the device. To further clean things up, execute the following:

fastboot format cache
fastboot format userdata

Step 8

Navigate once again to the out/target/product/{device} directory and execute the following to flash the built images to the device:

fastboot flash boot boot.img
fastboot flash system system.img
fastboot flash userdata userdata.img
fastboot flash recovery recovery.img

Reboot the device and you’re all set.

This is just the tip of the iceberg, and hopefully you’ll be able to now play with the internals of Android to understand how things are really stitched together. Good luck on your journey.


On key signing and trust

Key signing is a hallowed tradition in the open source world with a very specific protocol for validating and confirming an identity before accepting someone to the web of trust. It’s almost never done without meeting the person being admitted into the trust relationship and it goes like this:

  1. Individuals meet for a beer, or at a key signing party (for those who just went wtf, yes, these things are real, and they are crazy fun! see below for the type of shenanigans that take place at these reality-altering parties)
  2. They exchange strips of paper or business cards with their name, email address, key fingerprint and key ID
  3. They validate each other’s identity using Government issued photo IDs
  4. Once cleared, they pull down each other’s key from the key servers
  5. They validate that the fingerprint of the downloaded key matches what’s written on the piece of paper and the photo IDs exchanged at introduction
  6. If everything checks out, they sign each other’s key
  7. For additional security, the signed key is encrypted using the public key of the recipient and emailed to the address indicated in the key

Let’s look at this unnerving and highly nerdy exchange that has replaced “Hi, I’m Tom” with “Hi, I’m Tom, and here’s my fingerprint and Government-issued photo ID”. Here’s the rationale for some of the steps in this workflow.

The key is a personal identification and privacy instrument backed by strong science to assure non-repudiation. I will not go into the science in this post, but here’s where you may want to get started if you’re curious. A key aspect of this workflow is that nothing is trusted until verified, and the protocol is there to make sure that no compromise takes place.

At the beginning of the process, the public key is expected to be on a public key server network, and the meeting in person is to make sure that the key you’re signing (which is on a public system) belongs to the correct individual, and not to an individual (or three-letter agency) masquerading as someone else. The most secure way to ensure that is in person (because we’re a paranoid bunch), as that eliminates any chance of a malicious man in the middle. When one produces the piece of paper with the key fingerprint (again, backed by strong science), the signer is able to confirm, by comparing the fingerprint on the public server with the fingerprint presented in person along with the official photo ID, that the public key really belongs to the individual before him/her. The connection has now been made, and technology has once again prevailed in mathematically assured validation of another’s identity. The party is just getting started.

Once identity is validated this way, the signer signs the key and uploads it back to the key server, or emails a copy of it. This can be done after the party, in a more subdued setting, without the crazy paper shuffling and photo ID validation madness. The astute and more paranoid amongst us would encrypt it using the public key of the key being signed and email it to the address specified in the key, because that’s a good way to validate that the email address is correct and belongs to the right user. For the gnupg commands that make this workflow possible, check out the Debian Keysigning HOWTO.
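The signer’s side of that workflow can be sketched roughly as follows; the key ID here is a placeholder, and tools like caff (from Debian’s signing-party package) automate the encrypt-and-mail step, so treat this as illustrative rather than a recipe:

```shell
KEYID=0x0123456789ABCDEF      # placeholder: use the ID from the paper slip
gpg --recv-keys "$KEYID"      # pull the key from a keyserver
gpg --fingerprint "$KEYID"    # compare against the slip and the photo ID
gpg --sign-key "$KEYID"       # sign once everything checks out
gpg --send-keys "$KEYID"      # or export, encrypt, and email it instead
```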

In communities such as Debian, this process is mandatory to assure trust in a system that is largely decentralized. The Web of Trust that this creates gives rise to a truly magnificent network, which is difficult to subvert so long as the protocol is followed to ensure no compromise.

While this is good for cryptographically assured validation of one’s identity in a global network and non-repudiation of one’s contributions and electronic communications, trust is ultimately a very subjective attribute and probably can never be assured through a hash, because trust can be broken by people even though strong science says otherwise.

Debian packaging with git – Part 1

Recently, I’ve been taking a look at tools to version control and maintain my Debian packages in git.

Git, like Mercurial, is a distributed SCM; it has been used to maintain the Linux kernel since version 2.6.12. Branching in git is very cheap, merging is trivial, and it follows a decentralized model where developers can push and pull code from each other in true distributed fashion.

Of the Debian tools I’ve tried, the main contenders are:

  • gitpkg
  • git-buildpackage

gitpkg is extremely simple to use and quick to get you off the ground. If you already have an entire revision history of .dsc and .diff.gz files, maintained in the good old-fashioned way, gitpkg provides the git-debimport utility that lets you import everything into a git repository in one go. It does this intelligently, applying consecutive revisions and tagging the different versions along the way.

Here are the steps that I took to import my ncc package history into git:

$ mkdir /git/repo
$ cd /git/repo
$ git-debimport ../../debian/ncc/ncc

It will then pick up all the files beginning with ncc in the ../../debian/ncc directory.

To see the revision history:

$ cd ncc
$ gitk

The .orig.tar.gz files will be extracted to a branch called upstream and tagged (v2.5); the changes will then be merged to master with the .diff.gz files applied sequentially and tagged at each step (v2.5-1). This produces a series of tags along the lines of v2.5, v2.5-1, v2.6, v2.6-1, and so on.

To create the source package for a particular revision:

gitpkg v2.6-1 v2.6

Where the first argument is the tag with the Debian changes, and the second argument is the tag for upstream code.

$ ls -F ../deb-packages/ncc/
ncc-2.6/ ncc_2.6-1.diff.gz ncc_2.6-1.dsc ncc_2.6.orig.tar.gz
$ cd ../deb-packages/ncc/ncc-2.6
$ debuild

These steps didn’t work for me the first time: git silently dropped an empty directory, which caused the build to fail. The workaround was to create a .gitignore file in the empty directory. This is probably something that gitpkg could be taught to handle.
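If more than one empty directory is lurking in the tree, the same workaround can be applied in one sweep; a sketch, assuming GNU find:

```shell
#!/bin/sh
# git only tracks files, never bare directories, so any empty
# directory silently vanishes on import; seed each one with an
# empty .gitignore so it survives. (GNU find syntax.)
find . -type d -empty -exec touch {}/.gitignore \;
```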

To upgrade to a new upstream version:

$ git checkout upstream
$ rm -rf *
$ tar zxvf /path/to/new/upstream.tar.gz
$ git add .
$ git commit -m "New upstream version 2.7"
$ git tag v2.7

# Merge the Debian changes
$ git checkout master
$ git branch debian
$ git checkout debian
$ git merge upstream

# Fix conflicts / make Debian specific changes / test

$ git add .
$ git commit -a
$ git checkout master
$ git merge debian
$ git tag v2.7-1

# Generate Debian package artifacts
$ gitpkg v2.7-1 v2.7

# Build Debian package
$ cd ../deb-packages/ncc/ncc-2.7
$ debuild

A couple of caveats: gitpkg generates the orig.tar.gz by tarring up the upstream branch. As far as I know, gitpkg does not have pristine-tar support yet, and that’s a must-have if you don’t want to re-package the upstream sources. I also noticed that git-debimport had trouble with absolute paths, for which I submitted a patch – so I don’t have a whole lot of confidence in the tool yet, but it’s getting there.
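For reference, in trees that do support it, the pristine-tar workflow looks roughly like this (file and branch names follow the earlier ncc example; illustrative, not a tested recipe):

```shell
# store a small binary delta that lets git regenerate the tarball
pristine-tar commit ../ncc_2.6.orig.tar.gz upstream
# later, reproduce the original tarball bit-for-bit from the repo
pristine-tar checkout ncc_2.6.orig.tar.gz
```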

On OpenSolaris

In a recent article, Ted Ts’o makes some interesting points on Sun’s motives behind OpenSolaris, and how it fares today in the FOSS ecosystem as a result.

“Fundamentally, Open Solaris has been released under a Open Source license, but it is not an Open Source development community.”

It’s quite sad that this is the case, simply considering the enormous potential that OpenSolaris had back in 2005, and the opportunities for cross-pollination with Linux had the licenses been compatible. Given some of the killer features of the operating system, it’s quite a shame that it has not been able to rally the developer community it deserves.

At this point, I think the only hope for OpenSolaris is GPLv3 and a truly open development process. Then, for once, Linus’s kernel would have a strong contender and a raised bar on licensing grounds.

Nexenta (a project unaffiliated with Sun), essentially a Debian distribution with an OpenSolaris kernel, has been a strong attempt at attracting developers. Debian is by far the most developer-friendly GNU/Linux distribution out there, with a mature and proven development model, and building an OpenSolaris distribution with the userland tools of Debian makes the most sense.

I’ve been a Solaris user since version 6, which I attempted to run (quite foolishly) on a 333MHz Pentium. The user experience was anything but smooth, but I still ended up gaining a lot of respect for the platform. Only time will tell whether the tide changes for OpenSolaris or whether it ends up relegated to the Minix boat.

Updated: 03 May – Corrections on Nexenta

Marmot ’08

So, there’s a thread on the Debian mailing lists on a mascot for the project. Someone jokingly suggested a Marmot.

Marmots typically live in burrows, and hibernate there through the winter. Most marmots are highly social, and use loud whistles to communicate with one another, especially when alarmed.

Now if that’s not a reason why the Marmot should be the Debian mascot…

And just look at this froody Marmot. He’s the complete embodiment of all that is free and open.

And here’s yet another blissfully content Marmot, chilling out. These dudes are cool. This post deserves a whole new category. And it’s called…

wait… for… it…


My vote is for the Marmot.