Setup Domain Name for Raspberry Pis, wired to Local Network

In my previous article, I shared my experience of sharing the internet connection received by my desktop PC with the RPis. While this setup works fine for me, one small matter kept bugging me: whenever I need to reach one RPi from another RPi or from my connected desktop PC (e.g. over SSH), I have to remember & type in the target machine’s IP address. I want to improve this experience by typing just the hostname of the target machine, such as ssh pi@rylai instead of ssh pi@

I understood that in order to achieve this, I would need to set up & run Domain Name System (DNS) servers in my RPis’ LAN. But it was another “I know (what I should do), but I don’t know (the technical details of how to do it)” problem for me. After spending a couple of days searching for references that fit my case, I stumbled upon this article. Someone had already done it, but I wanted to prove it out and see whether it fit & worked on my RPis’ LAN.

In this article, I am going to share my experience of setting up the primary & secondary DNS servers on my desktop PC and my old Raspberry Pi 2 Model B, both connected to the RPis’ private local network.

Updated the IP Addresses

I did not change any of the physical parts involved in the private network. However, in order to follow the article, I changed each RPi’s and my desktop PC’s IP addresses as in the following list:

List of each machine’s new IP Addresses 

Since I updated the network’s address to (netmask, these updates allow me to connect up to 65,534 (2¹⁶ − 2) devices. Thus, I won’t need to worry about renumbering the network’s IP addresses when I add more devices to the private network in the future.

I applied the new IP addresses to the RPis by reverting the IP address configuration I had made in the dhcpcd.conf file, moving it into the /etc/network/interfaces file, and also changing each host’s IP address in the /etc/hosts file.

Added a new entry that maps the new IP address to the host’s new hostname & domain name
/etc/dhcpcd.conf on each RPi was reverted to its default
/etc/network/interfaces on each RPi.
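For reference, a static stanza in /etc/network/interfaces could look like the following (the 10.10.x.x addresses are hypothetical examples of a /16 scheme, not my actual ones):

```
# /etc/network/interfaces -- example static configuration for eth0
auto eth0
iface eth0 inet static
```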

I also made similar changes to my desktop PC’s /etc/network/interfaces and /etc/hosts files.

Changed the /etc/network/interfaces file with the new IP on my PC’s LAN port (enp6s0)

I then restarted all the RPis and my PC after making these changes, to ensure the changes took effect.

The new IP address reported by the ifconfig command, after rebooting the RPi

Setup Primary DNS Server on My Ubuntu PC

The software that runs a DNS server on Linux is bind9. I installed bind9 and some additional tools (dnsutils) on my Ubuntu desktop PC by running this command:

sudo apt-get update && sudo apt-get install bind9 bind9utils bind9-doc dnsutils

Then, I modified bind9’s default config to ensure that it runs in IPv4 mode.

Modified /etc/default/bind9 to force bind9 to run in IPv4 mode

Next, I modified the /etc/bind/named.conf.options and /etc/bind/named.conf.local files to put my private network’s address and the new IP addresses of each connected host into bind9’s configuration.

Adjusted /etc/bind/named.conf.options file
Adjusted /etc/bind/named.conf.local file
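For reference, the two files might contain something along these lines. The domain nextresearch.local matches the one mentioned later in this article; the 10.10.x.x addresses are hypothetical examples, so substitute your own:

```
# /etc/bind/named.conf.options (excerpt) -- example addresses
options {
        directory "/var/cache/bind";

        # forward queries we cannot answer ourselves to public DNS servers
        forwarders {
        };

        # answer queries from localhost and the private network only
        allow-query { localhost;; };
        listen-on-v6 { none; };
};

# /etc/bind/named.conf.local (excerpt)
zone "nextresearch.local" {
        type master;
        file "/etc/bind/zones/db.nextresearch.local";
};

# reverse zone for 10.10/16 (octets reversed)
zone "" {
        type master;
        file "/etc/bind/zones/db.10.10";
};
```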

Then, I created /etc/bind/zones/db.<my private network's domain name> as the zone’s config file and /etc/bind/zones/db.<my private network's unmasked ip address's parts> as the reverse zone’s config file. I changed these files as shown in the following screenshots.

The Zone File
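As an illustration, a forward zone file of that shape could look like the following (the hostname rylai and the 10.10.x.x addresses are examples; the reverse zone file mirrors it with PTR records instead of A records):

```
; /etc/bind/zones/db.nextresearch.local -- example forward zone file
$TTL    604800
@       IN      SOA     ns1.nextresearch.local. admin.nextresearch.local. (
                              3         ; Serial -- bump on every change
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL

; name servers
@       IN      NS      ns1.nextresearch.local.
@       IN      NS      ns2.nextresearch.local.
ns1     IN      A             ; desktop PC (primary DNS)
ns2     IN      A             ; RPi 2 (secondary DNS)

; hosts
rylai   IN      A
```

Each file can then be checked, e.g. with named-checkzone nextresearch.local /etc/bind/zones/db.nextresearch.local, as described in the next step.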

After I modified and created these files, I ran the named-checkconf & named-checkzone commands to ensure that the changes were valid.

Validate changes in bind9’s config files.

I also wanted to ensure my desktop PC could still access the connected RPis by hostname instead of IP address. To achieve this, I replaced the /etc/resolv.conf file with a symbolic link to /run/resolvconf/resolv.conf.

Replaced /etc/resolv.conf with a link

Then I modified the /etc/resolvconf/resolv.conf.d/head file, adding nameserver entries with the IP addresses of Google’s DNS servers and of my desktop PC.

Edited the /etc/resolvconf/resolv.conf.d/head file
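With hypothetical addresses, the added nameserver entries in the head file would look like this (the first line is my desktop PC’s own example IP as the local DNS server):

```
# /etc/resolvconf/resolv.conf.d/head -- example entries
nameserver      # this desktop PC (primary DNS)
nameserver        # Google public DNS
nameserver
```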

Lastly, I rebooted my Ubuntu desktop PC to see whether all the changes took effect and the primary DNS server ran properly.

Setup Secondary DNS Server on the Raspberry Pi 2 Model B

Setting up a secondary DNS server is optional. I decided to do it anyway, to make sure that when my PC is turned off while my clustered RPis are still running, the RPis can still reach each other by hostname. I decided to run the secondary DNS server on my Raspberry Pi 2 Model B, so that all the RPi 3s in the private network stay free to focus on the actual work that will happen in the future.

Setting up bind9 on my RPi 2 is quite similar to setting it up on my desktop PC earlier. First, I SSHed into the RPi 2, ran the repository update command, followed by the apt-get command for installing bind9, bind9utils, bind9-doc & dnsutils.

Install bind9 and utils on the RPi 2

Then, I made the same changes as on my PC’s bind9 default config file, to force bind9 to run in IPv4 mode.

Force bind9 to run in IPv4 mode

Next, I edited both the /etc/bind/named.conf.options & /etc/bind/named.conf.local files, as shown in the following screenshots:

/etc/bind/named.conf.options file of Secondary DNS server
/etc/bind/named.conf.local file of Secondary DNS server

Notice the different settings in the named.conf.local file compared to the primary DNS’s file: the type of each zone is set to slave, and within each zone there is a masters entry which points to the primary DNS’s IP address.
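Concretely, the secondary’s named.conf.local might declare the zones like this (same hypothetical names & addresses as in the earlier examples; the masters entry holds the primary’s actual IP):

```
# /etc/bind/named.conf.local on the secondary -- example
zone "nextresearch.local" {
        type slave;
        file "db.nextresearch.local";
        masters {; };   # the primary DNS server (desktop PC)
};

zone "" {
        type slave;
        file "db.10.10";
        masters {; };
};
```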

I did not need to create zone files for the secondary DNS server, since slave zones are transferred from the primary. So I just rebooted my Raspberry Pi 2 to apply the changes I made earlier.


Once I had rebooted my desktop PC and my Raspberry Pi 2, I began conducting a number of tests to confirm that the primary & secondary DNS servers were working properly. For the first test, I ran ping, dig & ssh from my desktop PC to one of the Raspberry Pi 3s.

Ping & SSH other nodes by their hostnames from Primary DNS Server

As we can see, the primary DNS server (my PC) was able to reach the RPis by their hostnames. Next, I ran dig against one of the RPis by hostname, from my PC.

Run dig against one of the RPi 3s by hostname & domain name

dig reveals more interesting info, such as the IP addresses of the primary & secondary DNS servers, besides the RPi’s full domain name and its IP address.

The second test did the same thing as the first, but from within an RPi 3.

Ping & SSH test from within an RPi3 to other machines
Running dig against other nodes from the RPi 3

Lastly, for the final test, I turned off the bind9 service on my desktop PC, to simulate the primary DNS server going down so the secondary DNS server would take over as the name server.

Disabling the bind9 on the Primary DNS server

Then I ran the ping & ssh commands from within an RPi 3 to the other RPi 3s and the RPi 2.

ping & ssh from an RPi 3 to other machines when the primary DNS is turned off

As shown in the screenshots, the secondary DNS server successfully took over the primary DNS server’s role when the primary was turned off.

Next plans 

At this point, my clustered RPis can “talk” to each other by hostname instead of by IP address. We have also ensured that this capability won’t be lost when I turn off my PC (the primary DNS server), as long as my RPi 2 B (the secondary DNS server) is still connected & functional.

In the future, I am going to utilise these clustered ARM small computers as my personal R&D infrastructure, supporting my backend application development: running Jenkins builds in master/slave mode, running clustered Redis database instances, simulating HA & load balancing test scenarios against my application’s backend API, security tests, running Docker swarm manager/worker containers, and more.



Sharing internet connection to Raspberry Pis, wired to Local Network

Some time before writing this article, I bought 5 units of the Raspberry Pi 3 Model B, along with: 6 UTP Cat 6 RJ-45 cables, 5 SanDisk 16GB micro SD cards, an 8-port TP-Link TL-SF1008D LAN switch, and an Anker 6-port USB power adapter with 6 USB OTG cables. I ordered those parts to build an ARM-based computer cluster, wired together to form a Local Area Network.

Once I had set up the Raspbian Jessie OS on each of these RPis, I booted them (using my desktop PC’s keyboard, mouse & HDMI monitor) and began assigning a hostname & static IP to each of the RPis (by editing their /etc/hostname & /etc/hosts files). I did this because I want to access the RPis remotely from my own desktop PC using SSH or remote desktop, by specifying their static IP addresses.

I wired the RPis to the LAN switch using the UTP Cat 6 cables, and connected my Ubuntu desktop PC to the switch as well. I planned the topology of the wired network as shown in the following picture:

I ran ping tests from one RPi to the other connected RPis and from my desktop PC to the RPis, and ran the ssh command from my desktop PC to the RPis; all of them went well.

A problem arose when I connected my desktop PC to the internet. Whether through tethering my iPhone to my PC or connecting the PC to my wireless MiFi modem, I could access the internet from my desktop PC, BUT not from the RPis in the wired LAN. My desktop PC was not sharing the internet connection it received with my RPis.

It took me a few hours to understand how sharing an internet connection from my Ubuntu PC to my RPis can be done. Through googling and trial-and-error attempts, I bumped into this article & followed some of its steps to get this done.

In this article, I am going to share my experience of how to share the internet connection received by my Ubuntu desktop PC with the wired Raspberry Pis over the Local Area Network.

Configure the DHCP Client service on each RPi

Defining a static IP address on an RPi by writing it in the /etc/hosts file helped me access the RPi remotely from my desktop PC. However, it’s not enough for the RPis to recognise my desktop PC as the gateway to the internet. We also need to define the addresses of the DNS servers that the RPis should query, so that when they browse to a specific site, they can find it by the site’s domain name (e.g., instead of by its IP address (e.g.

Through experimentation, I also found that each RPi runs dhcpcd, a daemon which obtains a dynamic IP assigned by a DHCP server. On one of my RPis, it gave me trouble: it overrode the defined static IP, replacing it with a dynamic one (e.g.

In order to prevent this, and also to `tell` the RPis that they should look up a valid DNS server and recognise my PC as the gateway to the internet, I modified the /etc/dhcpcd.conf file on the RPi as in the following screenshot:

The additional changes in the red-squared lines are interpreted as follows:

  1. interface eth0 — The next configuration lines are applied to the LAN interface of the RPi, eth0.
  2. static ip_address — This line defines the static IP of the current RPi and the subnet mask (/24).
  3. static routers — Defines the gateway’s IP. In this case, it points to the IP address of my Ubuntu PC.
  4. static domain_name_servers — Defines the IP addresses of valid DNS servers. In this case, I used the IP addresses of Google’s DNS servers.
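Putting those four settings together, the added block in /etc/dhcpcd.conf might look like the following (the RPi’s address follows the 192.168.100.x scheme used below; the gateway address is an example, so use your desktop PC’s actual LAN IP):

```
# additions to /etc/dhcpcd.conf
interface eth0
static ip_address=     # this RPi's static IP, /24 subnet
static routers=             # the Ubuntu PC acting as gateway
static domain_name_servers=  # Google's public DNS servers
```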

I saved the changes and then rebooted the RPi. Once the RPi rebooted successfully, I ran ifconfig to confirm that the eth0 interface had been assigned the static IP defined in /etc/dhcpcd.conf earlier.

Then, I pinged my Ubuntu desktop and confirmed that the RPi could still reach my PC.

I repeated these modification steps on the rest of the RPis, giving each a different IP address under the same subnet (e.g. RPi #2 =, RPi #3 =, etc.).

Configure IP Tables on the Ubuntu Desktop PC

At this point, I was still only halfway to sharing the internet connection with the RPis. Next, I added a Network Address Translation (NAT) rule on my PC, by running this iptables command in a terminal window:

sudo iptables -t nat -A POSTROUTING -o wlx6466b309ab63 -j MASQUERADE

Then I ran this command, to ensure that IP packets routed from any wired RPi will be forwarded to the wlx6466b309ab63 wireless interface:

sudo iptables -A FORWARD -i enp6s0 -o wlx6466b309ab63 -j ACCEPT

Lastly, we run this iptables command to forward incoming packets from the wlx6466b309ab63 interface back to the corresponding RPi, when the packets belong to an established connection:

sudo iptables -A FORWARD -i wlx6466b309ab63 -o enp6s0 -m state --state RELATED,ESTABLISHED -j ACCEPT
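One prerequisite the rules above depend on, and which is easy to miss, is kernel IP forwarding: Ubuntu disables it by default, and without it the desktop PC will not route the RPis’ packets at all. It can be enabled immediately, and persisted across reboots via /etc/sysctl.conf:

```
# enable packet forwarding right away
sudo sysctl -w net.ipv4.ip_forward=1

# and persist it by adding (or uncommenting) this line in /etc/sysctl.conf:
net.ipv4.ip_forward=1
```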

To confirm that the changes were applied, I browsed a web page from one of the connected RPis and confirmed that the page loaded in the RPi’s Chromium.

Confirming that the internet is shared with the RPi

Persist the changes on IP Tables permanently

At this point, I had configured the NAT & packet forwarding rules on my Ubuntu desktop PC for sharing the internet with my RPis. To ensure that this configuration won’t be gone the next time I restart my PC, I installed the iptables-persistent tool (run sudo apt-get install iptables-persistent to install it) and then ran the sudo netfilter-persistent save command in a terminal window. This ensures that the IP Tables configuration I made will be persisted and reloaded the next time I restart the PC.

Where to go from here

I felt excited seeing my RPis fully connected in the Local Area Network and also able to access the internet. Next, I would like to set up DNS server(s) in the LAN, so that each node in the LAN has its own internal domain name, such as mydesktop.nextresearch.local. Once that is done, I would like to set up a DHCP server in the LAN, so that each connected device gets a dynamic IP.



Building Redis Docker Image for Raspberry Pi

During the weekend when I wrote this article, I planned to spend my off-time creating a simple Docker swarm project, comprising my desktop PC as the swarm manager and my Raspberry Pi 2 Model B unit (attached to my PC over wireless LAN) as the worker node. The RPi worker node is designated to run a Redis server, as shown in the following picture:

Docker Swarm Sample - 1 manager (Desktop x64) with 1 Worker (ARM)

Right after initialising the swarm and joining the RPi to it as a worker node, I started pulling the newest version of the Redis container image on my RPi: redis:4.0.1-32bit. However, when I ran a container using this image on my RPi, it did not work. I remembered that the image is intended for x86 machines, while the RPi is an ARM-based computer. I needed to pick an ARM version of the Redis image, and I didn’t see any on Redis’s official Docker Hub page.

Then, I found that some folks at Hypriot had created a Dockerfile & script for building Redis for the RPi, although it is quite old. I want to run the latest version of Redis on the RPi (version 4.0.1), and that would require making some changes to Hypriot’s Dockerfile before building the image: replacing raspbian:wheezy with raspbian:jessie and changing the Redis version from 3.0.4 to 4.0.1. To make these changes, I forked the Hypriot repo and made the necessary changes on a new branch of the forked repo.

In the following sections, we will cover the detailed steps for making the latest version of Redis run on my RPi 2 Model B.

Making required changes on Hypriot’s Dockerfile

After I forked Hypriot’s repo, I cloned the forked repo onto my PC and made the following necessary changes to the Dockerfile:

  • Replace the OS image from raspbian:wheezy to raspbian:jessie — I want the image to run the latest version of Raspbian.


  • Update the Redis version, download URL & hash code. These are taken from this site.


  • Change the last line to ensure that when the container is running later, it will allow other connected machines to access the Redis database.


  • Push the changed Dockerfile to the forked repository, under the “redis-4.0.1” branch.
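After those edits, the relevant lines of the Dockerfile would look roughly like this. This is only a sketch: the surrounding build steps come from Hypriot’s original file, and the exact download URL & hash should be taken from the Redis release page as noted above:

```
FROM raspbian:jessie

ENV REDIS_VERSION 4.0.1
# REDIS_DOWNLOAD_URL and the SHA hash updated to the 4.0.1 values
# taken from the Redis release page

# ... Hypriot's original download/build/install steps, unchanged ...

# last line changed: listen on all interfaces so other machines can connect
CMD ["redis-server", "--bind", ""]
```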


Build the Docker image on the RPi

  • Open a new terminal window and SSH into the RPi.


  • Clone the forked repo.


  • Checkout the “redis-4.0.1” branch.


  • Run `docker build -t <dockerhub user id>/<repository name>:<tag> .` command to start building the image, as shown in this following example:


  • Confirm that the image build succeeded. Run the `docker images` command to ensure that the image was created.


Run & Test the Image

  • Now, we will run a container from the created image by running the `docker run` command.


  • Then run the `docker ps` command to confirm that the container is up and still running.


  • Now, we test the Redis container by doing CRUD operations against it from the connected desktop PC. We are going to use `redis-cli` for these query operations. If you don’t have the CLI tool on your machine, run the `sudo apt-get install redis-tools -y` command to install it.
  • To connect to the RPi which runs the Redis container, run the `redis-cli -h <rpi’s hostname or ip address>` command. Confirm that no error message is shown and the terminal enters the redis-cli prompt.
  • At the redis-cli prompt, run a query to store STRING data (e.g. set app.version 1.0). Confirm that the query returns ‘OK’. Next, run a query to get the STRING data back. Confirm that the query returns the correct result.
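A run-and-test session along the lines of the steps above might look like this (the image name, container name and hostname are illustrative, not the actual ones from my screenshots):

```
# on the RPi: run a container from the image, publishing Redis's default port
docker run -d --name rpi-redis -p 6379:6379 <dockerhub user id>/rpi-redis:4.0.1
docker ps

# on the desktop PC: basic CRUD against the container
redis-cli -h rylai.nextresearch.local
> set app.version 1.0      # expect: OK
> get app.version          # expect: "1.0"
```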


Push the image to Docker Hub

At this point, we have created an RPi-Redis Docker image that is ready to be used by our Raspberry Pis. However, we would like our future Raspberry Pis to be able to pull it from one common place. So, we are going to push our RPi-Redis image to a Docker Hub repository.

  • Before we can push our image to Docker Hub, ensure that we have created a Docker Hub account, then log into Docker Hub in the terminal by invoking this command: `docker login --username <your docker hub account’s username> --password <your docker hub’s password>`.


  • Finally, we run `docker push <dockerhub user id>/<repository name>:<tag>` to push the image to Docker Hub.


  • We should now see our RPi-Redis Docker image appear on our Docker Hub repository page.



Where do we go next from here

Creating the RPi-Redis image is just one of several steps required to see the Docker swarm sample in action. My next research will be understanding the concept of service discovery in a Docker swarm and its benefits, using Consul. Then, I plan to create my own Consul Docker image targeting both x86 & ARM (RPi) platforms, and put this piece into the swarm to get it all working together.


HOW TO – Install Facebook’s Watchman on Linux Ubuntu 16.04 LTS

Facebook’s watchman. Initially, I did not have any particular need for this software. However, I ran into issues when I tried to run a sample mobile app created in Expo XDE ( Looking at the error logs shown by Expo XDE, I noticed that Expo XDE runs watchman before it builds the project, and this process ended in failure.
Armed with these error logs, I decided to install watchman, and the effort was not as simple as running a sudo apt-get install command in one go. watchman is installed by pulling its source code, building it, and installing the result. It took me some time to figure out the details of these chores and install it properly.
I created this article as a reference for my future needs, and for anyone who shares the same problem when running Expo XDE or the React Native development tools.
  1. Install GNU M4: sudo apt-get install m4
  2. Install automake & autotools-dev: sudo apt-get install autotools-dev automake
  3. Install libssl-dev: sudo apt-get update && sudo apt-get install libssl-dev
  4. Ensure that checkinstall has been installed: sudo apt-get install checkinstall
  5. Pull watchman’s repository and then move into watchman’s source code directory: git clone && cd watchman
  6. Run these commands to build the source code: ./ && ./configure && make
  7. Package the binaries as a Debian package by running this command: sudo checkinstall. Follow the on-screen instructions until finished, and confirm that the Debian package (the .deb file) is created.
  8. Install the .deb package using either the GDebi package installer or Ubuntu’s default software installer.

By following this step guide, watchman should be installed on your Ubuntu machine, provided no errors were displayed at any of these steps.

HOW TO – Setup your first Go Lang Workspace and run your 1st `Hello World` in your Linux box

This article briefly covers how to install and set up the necessary tools to help you write & run your first Go Lang program on your Linux machine.

Download & Installing Go

  • Browse to and click the download link of Go Lang’s Linux Binary
Go Lang’s download page
  • Once the download is complete, open your terminal window and go to the directory containing the downloaded Go Lang binary file (e.g. ~/Downloads)


  • Run the tar -xvf command to extract the contents of the Go Lang binary file.
Running the tar -xvf command to extract the Go Lang binary file.
  • Move the extracted go folder into the /opt folder.


  • Go to your home directory and edit the .profile file there; in case you have not created it yet, create a new one. Open the file in a text editor such as vim, and write a line declaring a new environment variable GOROOT which points to the location of the moved Go binaries, the /opt/go directory. Then, add a line which exports PATH with $GOROOT/bin appended, so that we can run go from any directory.


  • Save the changes in the .profile file, then run the source .profile command in the terminal so the new changes take effect immediately in your environment. The next time you log in or boot into your Linux box with your current account, you should be able to run the go command from any directory.
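The two lines added to ~/.profile would look like this (assuming Go was moved to /opt/go as in the previous step):

```shell
# ~/.profile additions for Go
export GOROOT=/opt/go
export PATH=$PATH:$GOROOT/bin
```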


Setup your 1st Go Lang workspace

A workspace in Go Lang is a directory which contains the source code of our Go application & library projects, third-party Go dependencies, and the binaries of our compiled Go projects. Below are the steps to create it:

  • Create a new directory somewhere in your home directory (e.g. ~/projects/golang). This directory will be the root of our Go workspace. Within it, create 3 new subdirectories with the following names: bin, pkg, and src


  • Inside the src folder, create a new subdirectory which represents our source control provider, such as Then, go into the newly created subdirectory and create another one, named after our source control provider account (e.g. WendySanarwanto).


  • Go back to your home directory and edit the .profile file again in a text editor. Add a new entry which exports the GOPATH environment variable. Ensure that GOPATH points to the path of our workspace directory (e.g. ~/Documents/projects/golang). Save the changes and re-run the source .profile command to force the changes to take effect in your environment immediately.
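The extra lines in ~/.profile would look something like the following (pointing at wherever you created the workspace; the second line makes compiled binaries in $GOPATH/bin runnable from anywhere, which a later step relies on):

```shell
# ~/.profile additions for the Go workspace
export GOPATH=$HOME/Documents/projects/golang
export PATH=$PATH:$GOPATH/bin
```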



Creating your 1st “Hello World” Go Lang project

From here, we have created our initial Go Lang workspace. Now, we are ready to create our 1st “Hello World” Go Lang project.

  • Go to the workspace’s source code directory then create a new directory (e.g. ~/Documents/projects/golang/src/ )
  • Create a new .go file (e.g. hello.go). Open the file using a code editor such as Visual Studio Code.
  • In the blank .go file, we’ll write our 1st hello world in Go Lang as follows:
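Something along these lines (the greeting is split into its own function purely to make it easy to test; a plain fmt.Println inside main works just as well):

```go
package main

import "fmt"

// greeting returns the message the program prints.
func greeting() string {
	return "Hello, World!"
}

func main() {
	fmt.Println(greeting())
}
```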


  • We’ll go back to the terminal and run the go install command to compile the program. The compiled binary will be put under the $GOPATH/bin directory.


  • Since we have exported $GOPATH/bin as part of the $PATH variable, we should be able to run the program by typing the hello-golang command (the name of your compiled Go program).




At this point, we have set up a Go Lang workspace in our Linux box. The workspace is the single location where we will put the files & directories of our current & future Go source code projects, dependencies, and compiled binaries.
We have also set up the GOPATH & GOROOT environment variables and integrated them with the PATH variable. This makes it easier to go to the workspace’s location, execute compiled binaries, or just run the go command from any directory.


Serverless AWS Lambda – Part 2: Retrieve data from AWS DynamoDB

In the past article, we learned how to create our 1st AWS Lambda service using the Serverless framework. The function in our project currently exposes the GET HTTP verb and, when invoked, returns a list of hardcoded blog objects. In this article, we are going to extend its capability by refactoring this part to not return a hardcoded array. Instead, we are going to return an array of objects stored in an AWS data storage service.

DynamoDB vs SimpleDB vs RDS (Relational Database Service)

AWS offers 3 database services: DynamoDB (a NoSQL database solution), RDS (a relational database service, hosted & managed by AWS), and SimpleDB (a NoSQL database similar to DynamoDB, which AWS seems to `hide` from customers, but it’s still accessible). Among these 3 options, I rule out RDS, because its pricing is the most expensive of the three (

So it is SimpleDB vs DynamoDB. DynamoDB is popular and widely used. However, in terms of query speed & price, SimpleDB can be more attractive than DynamoDB in certain cases. I was tempted to choose SimpleDB over DynamoDB, but since in this sample we are going to build the backend API for a kind of blog application, DynamoDB is the more suitable choice for this case. In future articles, I will cover the SimpleDB version of this sample, because it still interests me.


Initialise blog table on AWS DynamoDB

  • Open the Blog project’s serverless.yml file and then add these resources entries.
Add resources block in serverless.yml

The resources section we added to the serverless.yml file is a way of telling Serverless to create a new DynamoDB table on AWS. In this section we define the table’s name, a String-typed attribute that we define as the primary key for the table, and the table’s initial read & write capacity units. As for the other attributes, we will add them when we create records later.
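The resources block might look roughly like this (the attribute name id and the Blogs table name appear later in this article; the capacity values are assumptions):

```yaml
resources:
  Resources:
    BlogsDynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Blogs
        AttributeDefinitions:
          - AttributeName: id       # String-typed primary key attribute
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:      # initial read & write capacity units
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
```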

  • Once we have saved the changes in the serverless.yml file, let’s go back to the terminal and invoke the serverless deploy command to deploy the service. You may want to remove the previously deployed service by running the serverless remove command first.
Running serverless deploy command
  • Now we are going to check the created table and add records to it. Let’s log into the AWS web console with your account and look for the DynamoDB home page.
Accessing DynamoDB’s Home Page
  • Go to the DynamoDB Tables page. Confirm that the Blogs table appears on screen. Select it, then click the Items tab.


  • On the Items tab view, click the Create item button and confirm that a modal dialog appears, as in the following screenshot.
Access DynamoDB Create Item page
DynamoDB Create Item Modal
  • Using the popup menu, add 3 more attributes (columns) and fill them with strings as their values.
Add more attributes with values
  • As for the id, we are going to assign it a UUID. To generate the UUID, we can use an available tool such as the one on this site.
Assign UUID on id attribute
  • Repeat the prior steps to add as many records as you want. You can go to the Actions menu -> click the Duplicate button to do this.
Created Blog records

Setup AWS SDK for Node.JS

AWS provides an SDK for developers which contains various APIs for accessing their services, including DynamoDB. We need to use the SDK for Node.js in our Lambda function to access the DynamoDB table we created previously.

  • In the Lambda project, ensure that there is a package.json file. Otherwise, we will need to create it by running the npm init command.
package.json file
Running the npm init command to create the package.json file properly
  • Once we have finished the prior step, we install the AWS SDK inside our project by running the npm install aws-sdk --save command. The extra --save argument adds an entry to the package.json file, ensuring that when CloudFormation builds our Lambda and runs the npm install command, npm will install the AWS SDK library into this project.
Installing AWS SDK
AWS SDK entry in dependencies section of package.json file
  • Ensure that you have set up your IAM account’s access & secret keys. If you are not sure about this, open the ~/.aws/credentials file and ensure that there are lines defining these key entries. If not, follow the guide on this site.
Content of the ~/.aws/credentials file

Refactor Blogs resource’s GET verb – Phase 1

In the prior article, we created a retrieveBlogs helper method which returns a list of hardcoded blog objects. We are going to create a new method to replace this retrieveBlogs method. Here are the steps:

  • Remove the retrieveBlogs method from the handler JS file. Move the removed method’s hardcoded lines into a new class, which we name DynamoDbDataService. Change the handler JS file to instantiate the new class and call its getAll method.
Moving retrieveBlogs method into the new DynamoDbDataService class
  • Then, in the handler JS file, we change the code by importing our new class, instantiating & initialising it, and calling its getAll method to retrieve the blog items from AWS DynamoDB.
Changed Handler JS file to use our new DynamoDbDataService class
  • Before we move to the next refactoring phase, we’ll invoke the serverless invoke local command first, to ensure that there are no errors in our new code.
Test our refactored lambda function in our local machine

Implement calls to AWS DynamoDB using AWS-SDK

  • Let’s go back to the DynamoDbDataService class. In the early lines (below the ‘use strict’ line), we’ll put a statement to import the AWS-SDK library.
Import AWS SDK
  • Moving to the constructor, we write lines to initialise the AWS configuration property. We want to tell the AWS SDK which AWS region our Lambda service is deployed to. We could hardcode it to a specific region such as us-east-1, ap-southeast-1, etc. But we won’t do it that way, because we don’t want to change this line in the future if we deploy the service to a different region.
    AWS provides an environment variable, AWS_DEFAULT_REGION, which is filled with the correct AWS region where our Lambda service is deployed in the cloud environment. Therefore, we are going to read the AWS region from the AWS_DEFAULT_REGION environment variable instead of hardcoding it.
Initialise AWS SDK & Region inside constructor
  • Remove all the hardcoded lines inside getAll’s returned Promise object. Then, we start by instantiating the AWS.DynamoDB.DocumentClient type. This type exposes methods, one of which can be used to pull data from a DynamoDB table: the scan method. scan takes TableName as a required parameter. We build scan’s parameters, which consist of TableName & Limit (the maximum number of returned records), assigning the tableName & numberOfItems properties’ values to these parameters. Next, we call the documentClient instance’s scan method, passing the params as its argument. When the call finishes, the callback in the method’s 2nd argument is triggered. Inside the callback, we check whether the call ended with an error or a result. If it ended with an error (err is not null), we call the promise’s reject method with the err object as its argument. Otherwise, we call resolve with the result (data) as its argument.
Re-implement the getAll method
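Putting the steps above together, the class might look roughly like this. The class and property names follow the article; injecting the document client through the constructor is my own assumption, made so the sketch can be exercised without a live AWS connection (the article's code would create `new AWS.DynamoDB.DocumentClient()` directly):

```javascript
// Sketch of DynamoDbDataService: getAll wraps DocumentClient.scan in a Promise.
class DynamoDbDataService {
  // documentClient would normally be `new AWS.DynamoDB.DocumentClient()`.
  constructor(tableName, numberOfItems, documentClient) {
    this.tableName = tableName;
    this.numberOfItems = numberOfItems;
    this.documentClient = documentClient;
  }

  getAll() {
    // Required TableName plus Limit (maximum number of returned records).
    const params = { TableName: this.tableName, Limit: this.numberOfItems };
    return new Promise((resolve, reject) => {
      this.documentClient.scan(params, (err, data) => {
        if (err) {
          reject(err);   // call failed: surface the error
        } else {
          resolve(data); // call succeeded: hand back the scan result
        }
      });
    });
  }
}
```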

Testing the changes in local development machine

  • At this point, we should be ready to test our changes. Before we deploy our code to AWS, it’s a good idea to test it first on our local machine. As usual, we do this by invoking the service with AWS_DEFAULT_REGION=<AWS Region of where your lambda sits> serverless invoke local -f <lambda function’s name>. In our case, we invoke: AWS_DEFAULT_REGION=us-east-1 serverless invoke local -f blogsFetch
  • Invoke the command and confirm that we get a result with status code 200 and a body containing the stringified data retrieved from our AWS DynamoDB table.
Invoke changed Lambda function in local against AWS DynamoDB

Deploy to AWS and testing it on API Gateway test page

  • All should still be fine. It’s time to upload our changed Lambda function to AWS. This time, instead of running the serverless deploy -s dev -r <aws-region> command, we call this command to deploy only the Lambda code (the node.js code): serverless deploy -f <function’s name> -s <stage> -r <region>. We use this command because we don’t want to rebuild other resources, such as the DynamoDB table, when deploying our updated Lambda code.
Deploy the updated lambda using -f argument (deploy per function)
  • Although it worked fine when we tested our lambda in the local environment, we still need to test the deployed Lambda function. This time, instead of using Postman to test our API, we will do it a different way: through the API Gateway Test Page. Go back to the AWS Console, then open the AWS API Gateway service page. On the page, click the Resources link in the left menu. Then, on the Resources pane, click the GET verb. On the right pane, click the blinking dot labelled TEST. Clicking it brings up the API Test page.
Accessing API Test Page
  • Confirm that the right pane refreshes and displays the test page. On the page, there is a blue button with a thunderbolt icon: the Test button. Click it. This invokes our blogs/fetch API.


  • When the call finishes, we receive an “Internal server error”. We will cover how to fix this issue next.
“Internal server error” when testing the API GET verb
  • In the displayed Logs, we cannot find any useful information explaining why the error happened. To find out what was going on and the cause of this error, we can look at CloudWatch Logs. Open the CloudWatch page, then click the Logs item in the left menu. Confirm that the right pane refreshes and displays a list of Log Groups. Click the item whose name matches our blogs lambda service.
CloudWatch page with displayed Log Groups list
  • When we click one of the displayed Log Group items, the right pane refreshes again and displays a list of Log Streams. Click the one with the latest Last Event Time.
CloudWatch page with displayed Log Streams list
  • On the next page, expand the item that looks like it explains this error. Notice the errorMessage & errorType: it seems that we have not authorised the Lambda function to perform the Scan operation against the designated DynamoDB table.


Fixing the unauthorised access error

  • To fix the previous error, we need to authorise our Lambda function to perform the scan operation against the DynamoDB table. The way to do this is by adding a DynamoDBIamPolicy entry in the serverless.yml file, under the Resources entry, as follows:
Updated serverless.yml with DynamoDBIamPolicy entry
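The original screenshot isn't reproduced here, but a serverless.yml entry of roughly this shape would grant the scan permission. The policy name, table name, and ARN below are assumptions for illustration; IamRoleLambdaExecution is the logical ID of the default execution role that the Serverless framework generates:

```yaml
resources:
  Resources:
    DynamoDBIamPolicy:
      Type: AWS::IAM::Policy
      Properties:
        PolicyName: lambda-dynamodb
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:Scan
              Resource: arn:aws:dynamodb:*:*:table/Blogs
        Roles:
          - Ref: IamRoleLambdaExecution
```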
  • Save the changed serverless.yml file and then re-run the serverless deploy -f <function’s name> -s <stage> -r <region> command.

Retesting the Lambda on API Gateway test page

  • Once we have redeployed our lambda function, go back to the AWS Web Console’s API Gateway Test page for our deployed Lambda function. Then, press the Test button. Notice that we no longer receive an error. Instead, we should see records from the DynamoDB Blogs table retrieved and displayed as follows.
Returned response from testing the GET API


At the end of this article, we have learned a number of key things to get our Lambda function to pull data from AWS DynamoDB. First, we defined the DynamoDB table we want to create by adding a Resources section to the serverless.yml file. Once we redeployed our Lambda to AWS, AWS CloudFormation created the DynamoDB table, besides our Lambda function & its API Gateway endpoint. Then, we filled the created table with several items through the AWS DynamoDB Web Console.

In the Lambda function’s handler code, we structured our code by moving the retrieveBlogs method into an ES6 class (the DynamoDbDataService class) and wrapping the method’s body with an ES6 Promise (because we want the record retrieval to be an asynchronous process). Then, we replaced the hardcoded lines with logic for initialising the AWS SDK’s DocumentClient class and calling its scan method to retrieve records from the Blogs table.

Aside from these, we learned how to get the detailed error log, in case a request to our deployed Lambda’s endpoints returns “Internal Server Error”, by looking at the AWS CloudWatch web page. We also learned that the API Gateway page has a section that allows us to test our Lambda’s HTTP endpoint. The source code for this article can be found in this link.

In the next article, we will add more verbs to the Lambda function so that it provides complete CRUD endpoints.

Serverless AWS Lambda – Part 1: A Quickstart for Beginners

This article covers the steps for creating your first AWS Lambda service & functions using the Serverless framework. Before following these steps, ensure that you already have an AWS account.


  • Ensure that you have installed the latest LTS version of Node.JS. Visit this link to find out how to install it on your machine:
  • Ensure that you have installed the Serverless framework. If not, run this command to install it:
    sudo npm install serverless -g

    Then, run this command to check whether serverless has been installed successfully or not:

    serverless -v
  • Create a new IAM user account or reuse an existing one. Ensure that you have given AdministratorAccess to the user account. If you are not sure how to do this, follow the guidance in this document
  • Upon creating a new IAM user account, take note of the displayed API Key & Secret Key of the new IAM account.
  • Follow the guidance in this document to configure the AWS credentials (API Key & Secret Key) that you noted in the prior step. The Serverless framework needs this information so that it can deploy to your AWS account.

Create your first service:

  • Once we have completed all of the required pre-requisites, create a new folder, go into it, then run this command to begin creating your first service:
    serverless create --template  --path

    Example:

    serverless create --template aws-nodejs --path blog

    Creating 1st Service
  • When creating the new service is finished, we will see the file structure in the project folder, as shown in the following screenshot:
    • serverless.yml – a YAML file where we define configurations for our service, such as the AWS Resources (S3, DynamoDB, etc.), Region, and Node.js runtime we want to use, and also our service’s function configurations.
    • handler.js – the initial Javascript file, created by serverless, that is supposed to be the place where we write our function’s logic. Rename the file with the name of the entity our function interacts with (e.g. blog, product, task, etc.).
Initial Project’s Structure
  • Open the serverless.yml file and edit these configuration sections: the lambda function’s name, the handler method’s name, and the associated HTTP path & verb.


serverless.yml – Configure service’s function
  • Open the handler javascript file. Let’s write code inside the exported function whose logic is simple – it just returns an array of JSON objects.
handler node.js – It retrieves the data, wrap the result in response’s body and return
handler node.js code – helper method that is supposed for retrieving the data from storage
  • Before we deploy the lambda function, let’s invoke it in our local machine through executing this command:
    serverless invoke local --function
     e.g. serverless invoke local --function blogs 

    Ensure that no error happens and that the correct result is printed in the terminal.


Deploy the service to AWS Lambda

We have implemented simple logic inside the Lambda service’s handler and then invoked it locally using the serverless invoke command. Now, we deploy our lambda by running this command in the terminal:

serverless deploy --stage <dev, uat, production> --region <region>

Example:

serverless deploy --stage dev --region ap-southeast-1


Deploying Lambda to AWS through calling serverless deploy command
When deployment is successful, we should get the URL endpoint of our deployed service and there should be no error message. If we don’t see the URL endpoint, there is probably a typo inside serverless.yml (check the events section; if you typed it as event, this issue occurs).

This is the result that you should get when you browse the endpoint or invoke it using Postman

Invoke the Lambda in Postman

How the Serverless deploys our Lambda function to AWS

When we invoked the serverless deploy command, serverless zipped our function file(s) and also created a file for configuring the AWS CloudFormation stack (the CloudFormation template). Serverless also created a new AWS S3 bucket using our AWS API Key & Secret Key, then uploaded the zip file & the CloudFormation template file into the created AWS S3 bucket.

Lambda files uploaded by Serverless in AWS S3 Bucket
Once the file upload is done, Serverless manages the creation of our Lambda function & its API Gateway endpoint through the AWS CloudFormation service, based on the uploaded CloudFormation template file. When the deployment finishes, you can see the created AWS resources that build up your deployed service by browsing the Lambda, CloudFormation, and API Gateway section pages in your AWS web console.

Removing Deployed Service

In case you need to destroy your deployed lambda service, the common way is to destroy the resources that built your service through the AWS web console, following these procedures: open the S3 page and destroy the bucket that serves the service, destroy the CloudFormation stack, destroy the Lambda, and then the related API Gateway. Serverless provides a much quicker way to destroy our service along with its AWS resources: invoking the serverless remove command inside the serverless project folder.

Removing Lambda function and its claimed AWS Resource


Serverless has simplified the effort of writing an AWS Lambda function, and of deploying & hosting the function as an API Gateway resource endpoint. By hosting our node.js-based API on AWS Lambda, we do not need to set up an EC2 instance or another kind of virtual private server just to host our code. By using Serverless & AWS Lambda, we shift this responsibility to AWS, which frees us from the responsibility of setting up our own server. In a future article, we will cover the steps for integrating our Lambda service with AWS data storage services such as SimpleDB or DynamoDB.

Host an Ionic app into AWS S3 Bucket using Serverless framework

When I was working on a mobile app front-end development project, we were asked by our customer to deploy the Ionic app to a public server (as a web SPA), so that they could access & play around with it immediately in their browsers. The Ionic framework comes with a built-in web server which allows you to run the app as a web application (by invoking the `ionic serve` command), so this part was not a problem for us. However, we had to figure out where & how we were going to deploy the Ionic app. At that time, we suggested to our customer deploying the Ionic app to our AWS EC2 instance/Elastic Beanstalk or to their own on-premise server.
Fast forward to today: we have more options, which allow us to deploy an Ionic app (and other web SPAs, like AngularJS or React) into an AWS S3 bucket and host the app there. This is possible since an AWS S3 bucket can host a static web page. We have several tools to help us do this: the AWS S3 web console, the AWS CLI tool, or the Serverless framework with the S3 client plugin.
At the time I wrote this article, I was working with a backend team which uses the Serverless framework in our project. Based on this, I was curious about this tool’s ability to deploy a front-end web Single Page Application (SPA), like Ionic, into an S3 bucket. In the next sections, we will cover the steps for doing this the Serverless framework’s way.



  • Ensure that you have AWS account.
  • Ensure that you have installed Node.js in your machine. Refer to this link if you have not installed it yet.
  • Ensure that you have installed the Ionic framework, Android SDK and/or XCode on your machine. Refer to this link if you have not installed Ionic yet.
  • Ensure that you have installed stable version of Serverless framework (version 0.5.6). If not, run this command to install it: npm install -g serverless@^0.5.6
  • Log in to your AWS Management Console, create a new IAM account and attach the AdministratorAccess policy to the created IAM account.


  • Ensure that you have noted down the created IAM account’s ACCESS KEY ID & Secret access key. We are going to use these in later steps.


Prepare a Serverless project

  • In a terminal, run the sls project create command. Notice that running this command creates a new CloudFormation template in your AWS account for the specified region (in this sample, we choose the ap-northeast-1 AWS region) and also creates a new folder in your current directory.



  • Change directory to the folder created in the prior step, then run this command: npm install serverless-client-s3 --save to install the serverless-client-s3 plugin
  • Create the folder client/dist. In a mac/linux shell, we can do this by running mkdir -p client/dist.
  • Copy all files inside your Ionic project’s /www folder into the client/dist folder created in the prior step.

  • In the serverless project’s root directory, edit the s-project.json file by adding the following changes:


Deploy the Ionic App into AWS S3 Bucket

  • In the serverless project’s root directory, run the sls client deploy -s stageName -r regionName command to deploy the Ionic app into an AWS S3 bucket for a specific stage & region. Example: sls client deploy -s dev -r ap-northeast-1 deploys the app to the Tokyo data center, labelled as the dev stage. This might take a while to finish.

  • Once it’s done, check your AWS S3 console. There is a new bucket in the list whose name matches the bucketname setting we defined in our s-project.json file. View its content by clicking the folder’s link. Notice that the bucket contains all of the files of our Ionic app that we copied into the client/dist folder in the prior step.

  • Click the Properties button on the page, and click the ‘Enable Web Hosting’ accordion button to display the URL of the deployed web application.
  • Click the URL to display our Ionic app. Ensure that no error happens and that our Ionic app’s landing page is displayed successfully.


The Flaw

The latest version of the serverless-client-s3 plugin (version 2.0) deploys the web app to the S3 bucket with no restrictive access policy, which means it is accessible to anyone / the public. While this is not a problem for us (the developers), to our customers this is mostly undesired, because they might not want people outside the developer team and themselves to look at our current work on the deployed Ionic app.

At the moment, restricting access to the S3 bucket requires us to add an access policy to it manually through the S3 web console. A couple of contributors have created pull requests on the serverless-client-s3 plugin’s github page which would allow us to define the S3 bucket policy inside the s-project.json file. Once the serverless maintainers have reviewed & merged these pull requests, I will update this article with the steps to add the S3 bucket’s access restriction policy.


Deploying Sails.js Web Application on AWS EC2 Instance

Sails.js is a web MVC framework, built on top of the Node.js platform, that is interesting to me. Besides leveraging the MVC pattern and offering blazing fast runtime performance (thanks to Node.js), it also comes with a built-in HTTP server. When I am going to run my web application, I do not need to compile, package & deploy my code into a separate web server. Instead, it just requires me to invoke one command to get my web application up & running:

 sails lift MyAwesomeWebApplication 

Also, it just needs me to press CTRL+C in the terminal to stop the running web application. A similar feature exists in Play.

In the meantime, I was thinking: what if I deploy & run a sails web application in a cloud environment, let’s say, Amazon Web Services? So, I made a first attempt by deploying a simple sails web app on AWS through Elastic Beanstalk (EB). The result: my web app’s homepage did not show in my browser. Instead, it displayed the EB app’s default homepage.

Then, I took another route. I created an EC2 instance (a virtual private server) using an AMI (Amazon Machine Image) with a pre-installed Ubuntu 14.04 OS. I installed the required software on the created EC2 instance, then lifted my sails web app on it. Then, I checked it using my browser, which confirmed that my sails web app was up & running.

In this article, I would like to share with you the steps that I took to get my sails app running in an Ubuntu AWS EC2 instance. Here they are:

Create a new AWS Instance using Ubuntu 64 AMI

  1. Browse to the AWS console & click the EC2 link. (screenshot: aws_console)
  2. On the EC2 Dashboard, click IMAGES->AMIs menu link.
  3. On the Filter menu bar, modify the options to Public images | 64-bit images | Ubuntu.
  4. On the returned filtered result, tick a desired AMI (e.g. ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server). (screenshot: ubuntu_ami)
  5. Click [Launch] button.
  6. On the “Step 2: Choose an Instance Type” page, tick a desired EC2 Instance Type (e.g. Micro Instances) then click [Next: Configure Instance Details] button. (screenshot: choose_instance_type)
  7. On the “Step 3: Configure Instance Details” page, leave the default settings and click [Next: Add Storage] button.
  8. On the “Step 4: Add Storage” page, adjust the Root’s storage size or just leave the default settings, then click [Next: Tag Instance] button. (screenshot: add_storage)
  9. On the “Step 5: Tag Instance” page, leave the default settings and click [Next: Configure Security Group] button.
  10. On the “Step 6: Configure Security Group” page:
    • Select “Create a new security group” option.
    • Enter name & description for the new security group.
    • Click [Add Rule] button & on the new entry, enter: Type = Custom TCP Rule, Protocol = TCP, Port Range = 1337, Source = Anywhere.
    • Click [Review and Launch] button. (screenshot: configure_security_group)
  11. On the “Step 7: Review Instance Launch”, click [Launch] button.
  12. On the displayed “Select an existing key pair or create a new key pair” dialog, select “Create a new key pair” in the combo field, enter a key pair name, then click the [Download Key Pair] & [Launch Instances] buttons.
  13. Save the downloaded .pem file into somewhere within your home directory & restrict its access by running this command in terminal:
    chmod 400 yourdownloadedkeypair.pem
  14. Go back to the browser and click the [View Instances] button. Notice that the browser redirects to the Instances dashboard page and the new AMI instance is shown in the Instances list. Give the new instance a name if you like (by clicking the new instance’s empty Name cell and typing the name in it). Make a note of the new instance’s Public IP or Public DNS fields. (screenshot: instances_dashboard)

Connecting to the created AMI Instance using SSH

  1. On the Instances Dashboard page, click the [Connect] button. A dialog appears, showing 2 options for connecting to the instance. Select “A standalone SSH client”, then select and copy the command written under the ‘Example’ section. (screenshot: connect_to_instance)
  2. Open the command line Terminal box, move to the directory that has the downloaded .pem file and run the command written in the earlier instructions dialog.
    ssh -i downloaded_keypair.pem ubuntu@new_instance_public_ip_or_dns


  3. Confirm that you have logged in successfully. (screenshot: ssh_connected)
  4. Set the root’s password by running these commands:
    sudo su


Setup required softwares on the created AMI Instance (as root)

  1. In the SSH terminal connected to the new instance, run the following commands to refresh the package indexes and update the installed software (note that update must run first, so the upgrades see the latest package lists):
    apt-get update && apt-get upgrade && apt-get dist-upgrade && apt-get autoclean


  2. Run this command to install the build tools & git client:
    apt-get install build-essential git



  3. Create a new directory & pull the latest Node.js source code into it by running this command (using the git client):
    git clone


  4. Change directory to the cloned Node.js source directory, then run these commands to compile & install Node.js (we are already root here, so sudo is not needed):
    ./configure && make && make install
  5. Confirm that the compilation process finished successfully. (screenshot: check_node_version)
  6. Install sail.js web MVC framework by running this command:
    npm -g install sails
  7. Confirm that the installation finished successfully. (screenshot: check_sails_version)

Deploy a sails.js app into the created Instance

  1. Ensure that you have put the sails.js app source code in your git account (e.g. github, bitbucket, etc.).
  2. On the created instance, create a new directory and git clone the sails.js app source code into it. (screenshot: clone_app_source)
  3. Change directory to the cloned source code’s directory and run this command to install node module dependencies referenced by your sails.js app
    npm install


  4. Run this command to lift the sails.js app online on the created instance:
    sails lift
  5. Confirm that the sails.js app is lifted successfully. (screenshot: sails_lift)
  6. Go back to your internet browser and browse to your instance’s public IP address, port 1337. (screenshot: sails_app_url)
  7. Confirm that the lifted sails.js app’s home page is displayed. (screenshot: lifted_sails_app)

It’s alive now ! But, wait..

When I closed my SSH session connected to the EC2 instance & refreshed the sails app’s page in my browser, I noticed that it returned a 404 error. My sails web app was offline. Apparently, each process started during an SSH session on the EC2 instance is shut down when the SSH connection is closed. Somehow, I needed to keep the running sails app alive even after the SSH session ends.

Fortunately, a solution for this is already suggested in the sails.js documentation. The document suggests installing forever & starting the sails.js app with it. Forever keeps a script running after the SSH session ends by running it as a daemon (*nix service). I tried the solution and it worked well. I explain the steps of how to ‘forever’ my sails app in EC2 in the next section.

Run the deployed app as a daemon in the EC2 Instance

  1. Install forever globally:
    npm -g install forever
  2. In the terminal connected to the EC2 Instance, change directory to the sails.js app’s root folder then run this command:
    forever start -ae errors.log app.js --dev --port 1337

    OR, run this command if you wish to run the production version:

    forever start -ae errors.log app.js --prod --port 80
  3. If you wrote your controllers as CoffeeScript files, open the errors.log file. Notice that there is an error message written in it (this means the sails.js app failed to be lifted by the prior command). This is a known issue in Sails.js version 0.9.16. The issue has been raised to balderdashy and can be seen in this link, along with a temporary workaround:
  4. Logout or disconnect from the EC2 instance’s SSH session and then browse to your lifted sails.js app’s url. Confirm that your lifted app is still up & running now.
The previous section marks the end of this article. I hope this ‘how-to’ guide helps you deploy your sails.js web app on your AWS account. Happy sailing in your AWS cloud.

Creating a Phonegap-Android Application Development Project on Intellij IDEA 12

Creating an Android mobile application can be tedious when you need to build rich UI elements. This can be a real problem if your UI designers have adequate or good HTML & CSS skillsets but little to no knowledge of working with Android XML layouts. Another problem arises when there is a requirement to ship your mobile app to other mobile platforms, such as iPhone, WinRT, Blackberry, etc., besides Android. You would need to spend more time, resources & effort designing, developing, testing & shipping your app across multiple mobile platforms, which could hurt your budget.

There is a workaround for this. Thanks to the people involved in the Apache Cordova project, a library named “Phonegap” was born to the rescue. Phonegap is a java library that enables the Android runtime to load & display HTML pages (along with their CSS styles) and also to execute Javascript files in an Android application. Thanks to this, UI designers are freed from working with tedious Android XML layouts and able to use their current HTML+CSS+JS skillsets to develop the app’s UI elements, similar to pages in a web application. Since the app is mainly built on top of HTML5+CSS+JS, a mobile application built on top of Phonegap is also runnable on other mobile platforms, such as iPhone, with minimal to no modifications to the original code.

So how would it look in a simple Hello World application? In this article, I will show you the steps to do it in my favourite Java IDE, Intellij IDEA 12.


  • Ensure you have set up the latest updates of JDK 6 or 7 on your machine. If you are a Linux Ubuntu user and have not done it yet, this article might be useful.
  • Ensure that you have setup Android SDK properly in your machine.
  • If you use Intellij IDEA like me, ensure that you have setup JDK & Android SDK settings in your IDEA.
  • Ensure that you have downloaded latest Phonegap’s library (By the time I write this post, the latest version is 2.7.0).

Enter the Steps:

    • Create a new Empty Project in the Intellij IDEA 12.
    • Add a new Android Application Module.
    • In the Android Module structure, create a new folder under ‘assets’ folder & name the new folder as ‘www’.
    • Copy Phonegap’s ‘cordova-x.x.x.js’ file into the ‘www’ folder.
    • Copy ‘cordova-x.x.x.jar’ into ‘libs’ folder.

    • Right click the ‘cordova-x.x.x.jar’ node & click ‘Add as library …’ option.
    • On the Create Library dialog, ensure that Name= cordova-x.x.x, Level = Project LIbrary, Add to Module = the Android Application Module then click [Ok].

    • Copy the Phonegap ‘xml’ folder, which is located in <your phonegap root folder>\lib\android, into the ‘res’ folder.
    • Add a new HTML5 file into ‘www’ folder. This is the HTML file which is designated for the application’s main page.
    • Add a script line in the Head tag for referring to Phonegap’s javascript file.
      <!DOCTYPE html>
      <html>
        <head>
          <title>Demo Phonegap</title>
          <script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script>
        </head>
        <body>
          <h2>Hello Android</h2>
        </body>
      </html>
    • Initially, our HomeActivity class extends Android’s Activity class. We need to modify it so that our HomeActivity class is able to load & display our index.html page as the app’s main page. First, we’ll modify the class so that it inherits from Phonegap’s DroidGap class. Secondly, we get rid of the second line inside the onCreate method and replace it with a call to DroidGap’s loadUrl method. This is the code that does the magic.
      package Demo.Phonegap.Application;

      import android.os.Bundle;
      import org.apache.cordova.DroidGap;

      public class HomeActivity extends DroidGap {
          /** Called when the activity is first created. */
          @Override
          public void onCreate(Bundle savedInstanceState) {
              super.onCreate(savedInstanceState);
              super.loadUrl("file:///android_asset/www/index.html");
          }
      }
    • Finally, we’ll modify the ‘AndroidManifest.xml’ file as in the following code snippet. Be advised, you might need to adjust the android:minSdkVersion option so that it matches the android version used by your AVD or physical Android devices.
      <?xml version="1.0" encoding="utf-8"?>
      <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                package="Demo.Phonegap.Application">
          <uses-sdk android:minSdkVersion="16"/>
          <supports-screens
              android:largeScreens="true"
              android:normalScreens="true"
              android:smallScreens="true"
              android:resizeable="true"
              android:anyDensity="true" />
          <uses-permission android:name="android.permission.VIBRATE" />
          <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
          <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
          <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
          <uses-permission android:name="android.permission.READ_PHONE_STATE" />
          <uses-permission android:name="android.permission.INTERNET" />
          <uses-permission android:name="android.permission.RECEIVE_SMS" />
          <uses-permission android:name="android.permission.RECORD_AUDIO" />
          <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
          <uses-permission android:name="android.permission.READ_CONTACTS" />
          <uses-permission android:name="android.permission.WRITE_CONTACTS" />
          <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
          <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
          <uses-permission android:name="android.permission.GET_ACCOUNTS" />
          <uses-permission android:name="android.permission.BROADCAST_STICKY" />
          <application android:label="@string/app_name" android:icon="@drawable/ic_launcher">
              <activity android:name="HomeActivity"
                        android:label="@string/app_name">
                  <intent-filter>
                      <action android:name="android.intent.action.MAIN"/>
                      <category android:name="android.intent.category.LAUNCHER"/>
                  </intent-filter>
              </activity>
          </application>
      </manifest>
    • Fire up your Android Virtual Device or connect your Android device to your development machine, then build & run the project. You will see this if everything is alright.

Where do we go next from here ?

Of course, you can tell your web designers to get back to working on the rich UI elements you wish to put in your super-awesome mobile app immediately 😀 However, there is a drawback for you as a Java developer if you go this way. You will need to use a lot of JS when coding the UI logic. Your Java skills would shift from being used for developing the UI logic (Activities, events) to your app’s backend development (this is true if you go n-tier with your mobile app and still want to use Java for developing your app’s backend services).
Another situation that you should consider is when you have no UI Designers in your team due to limited budget, for example. This should not stop you from using Phonegap. 3rd party vendors like Telerik or Community have created wonderful UI framework that would sit tight & nice with Phonegap, such as Telerik’s Kendo UI or Jquery Mobile. In the upcoming articles, i would present you about how to integrate these UI Frameworks with android & Phonegap through using a little bit advanced problem sample than a simple ‘Hello World’ app. Or extending them further by integrating cool JS framework like Durandaljs 😀 So, stay tuned, My Friends 🙂